Modern_Robotics_All_Videos
Modern_Robotics_Chapter_411_Product_of_Exponentials_Formula_in_the_Space_Frame.txt
Chapter 4 addresses the forward kinematics of open-chain robots, as illustrated in this video of a robot with 6 revolute joints. We define a frame {s} fixed in space, often at the base of the robot, and a frame {b} at the end-effector of the robot arm. If we command the robot to move, the {b}-frame moves. The forward kinematics problem is to find the configuration of the {b}-frame relative to the {s}-frame given the vector of joint angles theta. The transformation matrix representing the {b}-frame in the {s}-frame is T_sb of theta, or just T of theta for short. To derive a procedure to calculate T of theta, let's use a simple robot arm that moves in a plane. This robot has three joints: a revolute joint, a prismatic joint, and another revolute joint. You can also see the stationary {s}-frame and the end-effector {b}-frame. This is a stick figure version of the robot. The joint variable theta_1 represents the angle of joint 1 relative to the horizontal. Theta_2 represents the extension of the prismatic joint, and theta_3 represents the angle of joint 3. The 3-vector theta is a list of the three joint variables, and T of theta represents the configuration of the {b} frame relative to the {s} frame. If we set all of the joint variables equal to zero, the robot is in its home position, as shown here. We write the zero configuration of the {b} frame as T of zero, or simply M for short. At the zero configuration, the {b}-frame has the same orientation as the {s}-frame, and the 3 in the top right element of the M matrix means that the {b}-frame is 3 units from the {s}-frame in the xs-direction. Now say we rotate joint 3 by pi over 4 radians. The theta vector is now zero, zero, pi over 4. This motion of the {b} frame can be represented by a rotation about the screw axis of joint 3. Since it is a revolute joint with no translational motion, the screw axis has zero pitch. Because positive rotation is in the direction indicated in the figure, by the right-hand rule, the screw axis is out of the screen, toward you. As we learned in chapter 3 videos, a screw axis can be represented in any frame. Let's represent it in the {s}-frame, and let's call joint 3's screw axis S_3, consisting of an angular component omega_3 and a linear component v_3. Since the screw axis involves rotation, omega3 must be a unit vector. Positive rotation is about an axis out of the screen, which is aligned with the {s} frame's z-axis. Therefore, the unit angular component is zero zero one, a unit vector aligned with the zs-axis. To visualize the linear component v_3, imagine the entire space rotating about joint 3, visualized here as a turntable. Then v_3 is the linear velocity of a point at the origin of the {s}-frame when the turntable rotates with unit angular velocity, as shown here. So, v_3 is zero, minus 2, zero, meaning that the origin has a velocity of 2 units in the minus ys-direction. V_3 could also be calculated as minus omega_3 cross q_3, where q_3 is any point on the joint axis represented in the {s}-frame. Now that we have the screw axis S_3, we can calculate the {b}-frame configuration T of theta. We simply apply the space-frame transformation corresponding to motion along the S_3 screw axis by an angle pi over 4. This transformation is e to the bracket S3 times pi over 4 using the matrix exponential from the chapter 3 videos. Since it is a space-frame transformation, it premultiplies M. Now suppose we change joint 2, extending it by 0.5 units of distance. The theta vector is now zero, 0.5, pi over 4. 
The screw axis S_2 corresponding to joint 2 has zero angular component omega_2, so the linear component v_2 must be a unit vector. If we imagine the whole space translating at unit velocity along joint 2, a point at the origin of the {s}-frame would move with a linear velocity v_2 equal to one, zero, zero, expressed in the {s}-frame. Therefore the screw axis S2 is defined as zero, zero, zero, one, zero, zero. The new configuration of the {b}-frame, T of theta, is obtained by left-multiplying the previous configuration by e to the bracket S_2 times theta2. It's important to notice that the previous motion of joint 3 does not affect the relationship of joint 2's screw axis to the {s}-frame. That's because joint 3 is not between joint 2 and the {s}-frame. Therefore, S_2 is the same as the screw axis of joint 2 when the robot is at its zero configuration. Finally, let's rotate joint 1 by pi over 6. The theta vector is now pi over 6, 0.5, pi over 4. The screw axis S1 is a pure rotation about an axis out of the screen, so the omega_1 vector is zero, zero, one. Rotation about this axis does not cause any linear motion at the origin of the {s}-frame, so the v_1 vector is zero, zero, zero. The new configuration of the {b}-frame, T of theta, is again given by left-multiplying the previous configuration by the new space-frame transformation. Again, the previous motions of joints 2 and 3 do not affect the relationship of joint 1's screw axis to the {s}-frame, because they are not in between joint 1 and the {s}-frame. Therefore, S_1 is the same as the screw axis of joint 1 when the robot is at its zero configuration. For any serial robot, the procedure generalizes directly. First, define the M matrix representing the {b}-frame when the joint variables are zero. Second, define the {s}-frame screw axes S_1 to S_n for each of the n joint axes when the joint variables are zero. Finally, for the given joint values, evaluate the product of exponentials formula in the space frame. In the next video we will see an alternative version of this formula, in terms of screw axes expressed in the {b}-frame.
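As a concrete illustration of the space-frame product of exponentials formula described above, here is a minimal numpy/scipy sketch for the planar RPR example, using the screw axes S_1, S_2, S_3 and the M matrix from the video. The function names and the use of scipy.linalg.expm are choices for this sketch only; the book's accompanying software library provides its own equivalent routines.

```python
import numpy as np
from scipy.linalg import expm

def se3_matrix(S):
    """4x4 matrix form [S] of a 6-vector screw axis S = (omega, v)."""
    w, v = S[:3], S[3:]
    Sm = np.zeros((4, 4))
    Sm[:3, :3] = [[0, -w[2], w[1]],
                  [w[2], 0, -w[0]],
                  [-w[1], w[0], 0]]
    Sm[:3, 3] = v
    return Sm

def fk_space(M, Slist, thetalist):
    """Product of exponentials in the space frame: T = e^[S1]t1 ... e^[Sn]tn M."""
    T = np.eye(4)
    for S, theta in zip(Slist, thetalist):
        T = T @ expm(se3_matrix(np.asarray(S, float)) * theta)
    return T @ M

# RPR example from the video (screw axes in {s} at the zero configuration)
M = np.array([[1, 0, 0, 3],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
S1 = [0, 0, 1, 0, 0, 0]    # revolute joint 1, axis through the {s} origin
S2 = [0, 0, 0, 1, 0, 0]    # prismatic joint 2, translation along x_s
S3 = [0, 0, 1, 0, -2, 0]   # revolute joint 3, axis 2 units along x_s
theta = [np.pi / 6, 0.5, np.pi / 4]
print(fk_space(M, [S1, S2, S3], theta))
```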
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_5_Velocity_Kinematics_and_Statics.txt
In the last chapter, we studied the forward kinematics, relating the joint positions to the end-effector configuration, T of theta. In this chapter, we study the relationship between joint velocities and the end-effector velocity. Usually we represent the configuration of the end-effector as a transformation matrix and the velocity as a twist. To get to the main ideas quickly, though, lets represent the configuration of the end-effector using a minimum set of coordinates and the velocity as the time derivative of those coordinates. Then the forward kinematics can be written x equals f of theta, where x is a vector of m coordinates representing the end-effector configuration and theta is an n-vector of joint coordinates. To find the relationship between the joint velocities and the end-effector velocity, we take the time derivative of the forward kinematics. Applying the chain rule for differentiation and dropping the dependence on time, we get x-dot = d-f d-theta times theta-dot. Because x has m coordinates and the joint vector theta has n coordinates, the matrix of partial derivatives d-f d-theta is m-by-n. This matrix is called the Jacobian J of the robot arm. The Jacobian is important not only for relating joint velocities to end-effector velocities, but also for relating end-effector wrenches to joint forces and torques, as we will see soon. Let's look at a 2R arm as an example, where x_1 and x_2 are the end-effector coordinates. The forward kinematics are given by these equations. We take the time derivative of the forward kinematics, and we name the endpoint velocity v_tip. Notice that the vector equation can be written as a vector times the velocity of the first joint theta_1-dot plus another vector times the velocity of a second joint theta_2-dot. Let's call the first vector J_1. Clearly it is a function of the joint variables theta. J_1 is just the end-effector velocity when joint 1 rotates at unit speed while joint 2 is kept constant. We can visualize it as a vector orthogonal to the line connecting joint 1 to the end-effector. Similarly, J_2 is the velocity of the end-effector when joint 2 rotates at unit speed while joint 1 is kept constant. Plotting them both, we see that J_1 and J_2 form a basis for the space of linear velocities of the end-effector. We can put the J_1 and J_2 vectors side-by-side to form the Jacobian matrix J. The end-effector velocity is just a linear combination of J_1 and J_2, with coefficients equal to the joint velocities. If the second joint angle is zero degrees, as shown here, or 180 degrees, J_1 and J_2 are aligned, and it is impossible to generate any end-effector velocity except along this line. I can use my arm as an example. When it is fully extended, rotation about my shoulder, and about my elbow, both cause the hand to move vertically. The arm loses the ability to move in some directions when the dimension of the column space of the Jacobian drops from its maximum value, and such a configuration is called a singularity. We can use the Jacobian to find limits on the end-effector velocity due to limits on the joint velocity. In this figure, the set of allowable joint velocities is depicted as a square in the joint velocity space, with the corners A, B, C, and D. The point A, for example, corresponds to simultaneous maximum positive velocity at both joints 1 and 2. 
We can map this square set of joint velocities through the Jacobian to get a parallelogram of possible velocities at the tip, including the tip velocity A corresponding to the joint velocity A. The Jacobian, and therefore this parallelogram, depends on the joint angles theta. Instead of a square of possible joint velocities, it is common to consider a circle, or more generally a sphere, of possible joint velocities. This maps through the Jacobian to produce an ellipse, or more generally an ellipsoid, of possible tip velocities. This ellipsoid shows that the robot can move fast up and to the left, but only slowly up and to the right. If the robot is at a different configuration, however, the ellipsoid can look very different. We call these ellipsoids manipulability ellipsoids. If the 2R robot is at a singularity, the ellipse collapses to a line segment. The Jacobian also relates forces at the end-effector to forces and torques at the joints. To find this relationship, let tau be the vector of joint torques and forces generated by motors at revolute and prismatic joints, respectively. From physics we know that velocity times force is power, and the equivalent for a robot arm is theta-dot transpose times tau is equal to the power produced or consumed by the robot's motors. The robot's velocity and joint torques could instead be expressed in terms of a velocity and force at the tip, so the power can be written equivalently as v_tip transpose times f_tip, where f_tip is the force applied by the end-effector. If no power is used to move the robot, then theta-dot transpose times tau is equal to v_tip transpose times f_tip. Using our previously derived identity v_tip equals J-theta-dot, we get this equation. We can rewrite this as this equation using the fact that the transpose of J-theta-dot is equal to theta-dot transpose times J transpose. Since this equation holds for all theta-dot, the equation reduces to tau equals J transpose times f_tip, the relationship we were looking for. For our force analyses, we assume the robot is at equilibrium and that all joint torques and forces create forces at the end effector; no joint effort is needed to cancel gravity, for example. This equation is useful for force control: if we want the robot to generate the force f_tip at its end-effector, the motors must generate joint torques and forces equal to J transpose times f_tip. In the case that J transpose is invertible, we also have the relationship f_tip equals J transpose inverse times tau. Assuming J transpose is invertible, we can map joint torque limits to tip force limits using the inverse of the Jacobian transpose, similar to how we mapped joint velocity limits to tip velocity limits. In this configuration, the arm can apply large forces up and to the right, at point D, but much smaller forces up and to the left. The reason is clear: a line of force up and to the right passes close to the joints, therefore requiring little torque about the joints, while a line of force up and to the left passes far from the joints, requiring larger torque. In the extreme case, if my arm is extended straight out, at a singularity, I could theoretically resist infinite forces applied along the arm, because the forces pass through the joints, creating zero torque. On the other hand, it is hard to hold this book up in gravity, because the force requires large joint torques. 
This example also demonstrates that the tip force limits are in a sense reciprocal to the tip velocity limits: it is easy to apply force in a direction that it is hard to move, and hard to apply force in a direction that it is easy to move. It is common to consider joint torque limits that are circular, or more generally spherical. In this case, the torque limits map to an end-effector force ellipse or ellipsoid. As with the manipulability ellipsoid, the force ellipsoid depends on the configuration of the robot. So, this video summarizes the important concepts of Chapter 5. In the rest of this chapter, we translate these concepts to twists and wrenches. In particular, in this video we represented velocities as v_tip, the time derivative of coordinates. In the remainder of the chapter, we represent end-effector velocities as twists, either in the {s}-frame or the {b}-frame. In this video, we represented end-effector forces as f_tip, forces dual to v_tip. In the remainder of the chapter, end-effector forces will be represented as wrenches, either in the {s}-frame or the {b}-frame. In the next video we derive the Jacobian when the end-effector velocity is expressed as a twist in the space frame {s}.
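The 2R-arm discussion above can be made concrete with a short numpy sketch: it builds the Jacobian from the forward kinematics, maps joint velocities to a tip velocity, computes the manipulability ellipse from A = J J-transpose, and evaluates tau = J-transpose times f_tip. The unit link lengths and the sample numbers are assumptions for illustration.

```python
import numpy as np

def jacobian_2R(theta, L1=1.0, L2=1.0):
    """Jacobian of the planar 2R arm's tip coordinates (x1, x2)."""
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([[-L1*np.sin(t1) - L2*np.sin(t12), -L2*np.sin(t12)],
                     [ L1*np.cos(t1) + L2*np.cos(t12),  L2*np.cos(t12)]])

theta = np.array([0.2, 1.2])            # a nonsingular configuration
J = jacobian_2R(theta)

# Tip velocity as a linear combination of the columns J_1 and J_2
theta_dot = np.array([1.0, -0.5])
v_tip = J @ theta_dot

# Manipulability ellipse: the unit circle of joint velocities maps to
# v_tip^T (J J^T)^{-1} v_tip = 1; principal axes come from A = J J^T
A = J @ J.T
eigvals, eigvecs = np.linalg.eigh(A)
print(np.sqrt(eigvals))                 # half-lengths of the ellipse axes

# Statics: joint torques that balance a tip force f_tip
f_tip = np.array([0.0, 1.0])
tau = J.T @ f_tip

# At theta_2 = 0 (arm straight), the Jacobian loses rank: a singularity
print(np.linalg.det(jacobian_2R([0.2, 0.0])))   # ~0
```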
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_1211_FirstOrder_Analysis_of_a_Single_Contact.txt
Contact kinematics is the study of the motion constraints due to contact between bodies. For example, if these two bodies are in contact, I could ask what motions of the bodies will keep them in contact and what motions will cause breaking contact. Let's say that q_1 and q_2 are coordinate representations of the rigid-body configurations, q is the combined configuration, and d of q is the distance between the two bodies. We can create a table of possibilities governing whether the two bodies are in contact depending on their trajectories q_1 of t and q_2 of t. If d is greater than zero, the two bodies are not in contact. If d is less than zero, the two bodies are in penetration, and therefore the combined configuration is not allowed. If d is equal to zero, the bodies are currently in contact, but if d-dot is greater than zero, this contact is about to break. If d-dot is less than zero, the bodies are about to penetrate, so the trajectories q_1 of t and q_2 of t are not allowed. If d and d-dot are zero, the bodies are in contact, but if d-double-dot is greater than zero, the contact is about to break. We could continue this analysis for increasing derivatives of d. The bodies only remain in contact if all time derivatives of d are equal to zero. If we assume the bodies are initially in contact, we can express the time derivative of the distance between two bodies as d-dot equals the vector of partial derivatives of d with respect to q times q-dot. The acceleration of d is d-double-dot, which is the sum of the partial derivatives times q-double-dot and a velocity-product term depending on the matrix of second derivatives of d with respect to q. The vector of partial derivatives carries first-order information about the contact geometry, called the contact normal, which I'll define shortly. The matrix of second derivatives carries second-order information about the contact geometry, namely the curvature at the contact. For simplicity, in this chapter we assume that the second-order and higher-order information on the contact geometry is not available, and we focus on first-order contact geometry. I will highlight cases where the effect of this decision has consequences. Consider this planar disk contacted by a stationary constraint. This constraint could be a robot finger, a workpiece fixture, or some other part of the robot or the environment. We define the contact tangent line to be the line tangent to the bodies at the contact. We also define the contact normal n to be a unit vector orthogonal to the tangent line. The contact normal could be defined either upward or downward. Now imagine the disk is in contact with a constraint with a different curvature. The contact normal is the same relative to the disk, and by our first-order analysis, which ignores curvature, the constraints on the disk's motion are identical. If the movable body is a spatial body contacted by another spatial body, the unit normal is orthogonal to the tangent plane. Again, because we ignore curvature, this pencil provides the same motion constraints on the movable body. In this chapter we assume contacts between rigid bodies can be modeled as a finite set of point contacts. A planar contact that looks like this is modeled as two contacts, one on each edge adjacent to the vertex, with these contact normals. A line segment contact is modeled as a contact at each end of the line segment. A planar patch contact is modeled as a set of contacts at the vertices of the planar patch. 
A degenerate contact like this is not allowed, as there is no uniquely defined tangent plane or contact normal. In the next video we will derive the constraints on the twists of bodies in contact and we will categorize contacts as breaking, sliding, or rolling.
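A minimal sketch of the first-order contact test in the table above, for the special case of two planar disks, where the distance function and the contact normal have simple closed forms. The radii, centers, and velocities below are made-up values.

```python
import numpy as np

def classify_disk_contact(c1, c2, r1, r2, c1_dot, c2_dot, tol=1e-9):
    """First-order contact check for two planar disks with centers c1, c2."""
    diff = np.asarray(c1, float) - np.asarray(c2, float)
    dist = np.linalg.norm(diff)
    d = dist - r1 - r2                    # signed distance between the bodies
    if d > tol:
        return "not in contact"
    if d < -tol:
        return "penetration (not allowed)"
    n = diff / dist                       # contact normal, along the line of centers
    d_dot = n @ (np.asarray(c1_dot, float) - np.asarray(c2_dot, float))
    if d_dot > tol:
        return "contact breaking"
    if d_dot < -tol:
        return "about to penetrate (not allowed)"
    return "in contact to first order (sliding or rolling)"

# Disk 1 (radius 1) touching a stationary disk 2 (radius 1), moving tangentially
print(classify_disk_contact([2, 0], [0, 0], 1, 1, c1_dot=[0, 1], c2_dot=[0, 0]))
```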
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_113_Motion_Control_with_Velocity_Inputs_Part_3_of_3.txt
For the case where the controller commands joint velocities, in the previous video we derived a feedforward plus PI feedback controller when the desired motion is expressed in joint space. Now consider the case where the desired motion is expressed as X_d, the desired SE(3) configuration of the end-effector as a function of time. The corresponding twist V_d at any instant of time is X_d-inverse times X_d-dot, which is the twist of the end-effector expressed in the desired frame of the end-effector. Similarly, the actual motion is defined by the actual configuration X as a function of time and V_b, the end-effector twist expressed in the end-effector frame. Now we can write a task-space version of our feedforward plus PI feedback controller. The controller commands the actual end-effector twist V_b. The feedforward component is the desired twist V_d, but expressed in the actual end-effector frame. To change the frame of reference of the twist, we use the matrix adjoint of the transformation matrix X_bd that expresses the desired configuration relative to the actual configuration. X_bd is calculated as X-inverse times X_d, which we can derive by remembering our subscript cancellation rule and that X_d can be written X_sd and X can be written X_sb, where {s} is the implicit space frame. Next, we add the PI feedback portion of the controller, replacing theta_e with X_e. The configuration error X_e is not an element of SE(3). Just as theta_e represents a vector from the actual joint angles to the desired joint angles, X_e is a twist pointing from the current configuration to the desired configuration. X_bd, which we calculated earlier to be X-inverse times X_d, is the configuration of the desired frame relative to the actual frame, and the log calculates the matrix representation of the twist, expressed in the end-effector frame, that goes from the actual frame to the desired frame in unit time. This twist is multiplied by the proportional gain K_p, and integrated and multiplied by the integral gain K_i, to get the PI feedback portion of the commanded velocity. The final controller can be written like this, a task-space feedforward plus PI feedback control with velocity inputs. The actual joint velocities are calculated using the Jacobian inverse or pseudoinverse. This controller calculates the error between two configurations in terms of a twist. Another option is to decouple the rotational error and the linear error. Consider a configuration represented as a rotation matrix R and a position vector p. Then we can separately calculate the commanded angular velocity omega_b and linear velocity p-dot. The feedforward component for omega_b expresses the desired angular velocity omega_d in a frame actually oriented with the end-effector. The feedforward linear velocity is just the rate of change of the p coordinates. The PI feedback portion of the controller defines a configuration error X_e that has an angular velocity, expressed in the end-effector frame, that takes the end-effector to the desired orientation in unit time, as well as the typical coordinate error for the position p. The resulting control law is a decoupled task-space controller with velocity controls. As an example, assume that the red frame represents a stationary desired configuration, and the green frame represents the actual configuration. If we set the feedforward control and the gain K_i to be zero, the task-space controller that couples angular and linear errors produces a motion about a constant screw axis. 
The decoupled task-space controller, on the other hand, carries the origin of the frame along a straight-line path. This concludes our study of robot control with velocities as controls, but we will see these control laws again in Chapter 13 when we study wheeled mobile robots. For the rest of this chapter, we assume that the controls are joint torques and forces and consider the robot's dynamics.
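Here is a hedged numpy/scipy sketch of one step of the task-space feedforward plus PI feedback law described above. It assumes the rotation angle of X-inverse times X_d is less than pi, so that scipy's matrix logarithm returns the principal twist, and it uses the (omega, v) twist ordering; the 6-by-6 gain matrices and the helper names are choices for this sketch only. The commanded joint velocities would then follow from the pseudoinverse of the body Jacobian, as in the video.

```python
import numpy as np
from scipy.linalg import logm

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def adjoint(T):
    """6x6 adjoint [Ad_T] mapping twists (omega, v) between frames."""
    R, p = T[:3, :3], T[:3, 3]
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[3:, 3:] = R
    Ad[3:, :3] = skew(p) @ R
    return Ad

def se3_log_vec(T):
    """Twist (omega, v) whose matrix exponential reproduces T (rotation angle < pi assumed)."""
    logT = logm(T).real
    omega = np.array([logT[2, 1], logT[0, 2], logT[1, 0]])
    v = logT[:3, 3]
    return np.concatenate([omega, v])

def task_space_control(X, X_d, V_d, Kp, Ki, Xe_integral, dt):
    """One step of feedforward + PI feedback, commanding the body twist V_b."""
    X_bd = np.linalg.inv(X) @ X_d          # desired frame relative to the actual frame
    X_e = se3_log_vec(X_bd)                # error twist, expressed in the end-effector frame
    Xe_integral = Xe_integral + X_e * dt   # running integral of the error twist
    V_b = adjoint(X_bd) @ V_d + Kp @ X_e + Ki @ Xe_integral
    return V_b, Xe_integral
```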
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_11221_FirstOrder_Error_Dynamics.txt
Let's continue to assume our error dynamics can be expressed as a linear ordinary differential equation, such as this second-order differential equation from the previous video. Let's set the mass equal to zero, giving us this first-order differential equation, which says that the force due to the spring and the force due to the damper always sum to zero. We define the time constant b divided by k and rewrite the error dynamics in this standard first-order form, theta_e-dot plus 1 over the time constant times theta_e equals zero. This error differential equation is stable if the time constant is positive. If the time constant is negative, perhaps because of a negative stiffness spring, the differential equation is unstable and initial error grows with time. The solution to this differential equation is theta_e equals e to the minus t over the time constant times the initial error theta_e at time zero. The unit step error response, where theta_e at time zero is equal to 1, can be plotted as a decaying exponential of time. The decay gets faster as the time constant decreases, either because the spring gets stiffer or the damper gets softer. The steady-state error is zero and the overshoot is zero. The 2 percent settling time, meaning the time for the error to decay to 2 percent of its initial value, is determined by solving for the time t satisfying this equation. Taking the natural log of both sides, we see that the error decays to 2 percent of its initial value after approximately 4 time constants. In the next video, we will consider the case of a second-order error differential equation.
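A short numerical check of the first-order error response, with made-up spring and damper constants; it confirms that the 2 percent settling time is roughly four time constants.

```python
import numpy as np

b, k = 2.0, 8.0                          # damper and spring constants (assumed values)
t_const = b / k                          # time constant of the first-order error dynamics
t = np.linspace(0, 5 * t_const, 200)
theta_e = 1.0 * np.exp(-t / t_const)     # unit step error response

# 2% settling time: solve exp(-t / t_const) = 0.02  ->  t = t_const * ln(50) ~ 3.9 t_const
settling_time = -t_const * np.log(0.02)
print(t_const, settling_time)
```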
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_82_Dynamics_of_a_Single_Rigid_Body_Part_2_of_2.txt
In the previous video, we derived these equations of motion for a rigid body, for wrenches, twists, and accelerations defined in a body frame {b}. In this video, we express these equations in a form analogous to the equations of motion for a body that only rotates. To do this, we define the 6-by-6 spatial inertia matrix G_b. The top left 3-by-3 submatrix is the inertia matrix I_b, and the bottom right 3-by-3 submatrix is a diagonal matrix, the mass of the body multiplied by the 3-by-3 identity matrix. With this, the kinetic energy of a rotating and translating rigid body can be written as the sum of the rotational kinetic energy and the linear kinetic energy, or simply one-half times the body twist transpose times G_b times the body twist. Also, just like the mass matrix of a robot and the rotational inertia matrix I_b, the spatial inertia matrix G_b is symmetric and positive definite. Before proceeding further, we need to define an operation on 6-dimensional twists that is analogous to the cross-product operation on 3-dimensional vectors. Remember that we use the bracket notation to write the cross product of omega_1 and omega_2 in R3. The result is a 3-vector. The 3-by-3 little so(3) matrix form of this cross product is bracket-omega_1 times bracket-omega_2 minus bracket-omega_2 times bracket-omega_1. For 6-dimensional twists, our analogy to the cross product is bracket-V_1 times bracket-V_2 minus bracket-V_2 times bracket-V_1, a 4-by-4 matrix in little se(3). In vector form, this is little-adV_1 times V_2, where the little adjoint of a twist V is defined by the 6-by-6 matrix shown here. Little-adV_1 times V_2 is called the Lie bracket of V_1 and V_2. The Lie bracket of V_1 and V_2 is an acceleration, measuring how motion along the twist V_2 would change if the body follows the twist V_1. The matrix form of the Lie bracket is analogous to the matrix form of the cross product. Returning now to our equations of motion, after a little manipulation we find that these equations can be expressed as this 6-vector equation: the wrench F_b equals G_b times V_b-dot minus little-adV_b-transpose times G_b times V_b. Notice that the second term is a velocity-product term. This equation is analogous to the equation for a rotating rigid body, replacing the rotational inertia matrix I_b with the spatial inertia matrix G_b, the angular velocity omega_b with the twist V_b, and the cross product with omega_b with a Lie bracket with V_b. Now that we have the equation of motion in the {b} frame, we could ask what the equation of motion is in a different frame, {a}. Equating the kinetic energy expressed in each frame, we find an expression for the 6-by-6 spatial inertia matrix expressed in the {a} frame, G_a, in terms of the G_b matrix and the transform T_ba expressing the {a} frame in the {b} frame. With this inertia matrix, we find that the equation of motion in terms of the wrench F_a, the spatial inertia matrix G_a, and the twist V_a has the same form as it does in the {b} frame. We usually prefer to write the equations in terms of a center of mass frame {b}, however. To wrap up, we have derived the inverse dynamics for a rigid body: given the twist and acceleration, we can calculate the wrench needed to generate this motion. We can also write the forward dynamics, which takes the current twist and the applied wrench and calculates the acceleration. In the next video we will use these results to derive the recursive Newton-Euler algorithm for the inverse dynamics of a robot.
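The spatial inertia matrix, the little adjoint, and the single-rigid-body equation of motion F_b = G_b V_b-dot minus ad-transpose-of-V_b times G_b times V_b can be written as a compact numpy sketch; the function names are illustrative only.

```python
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def spatial_inertia(mass, I_b):
    """6x6 spatial inertia G_b for a body frame {b} at the center of mass."""
    G = np.zeros((6, 6))
    G[:3, :3] = I_b
    G[3:, 3:] = mass * np.eye(3)
    return G

def ad(V):
    """Little adjoint [ad_V] of a twist V = (omega, v)."""
    w, v = V[:3], V[3:]
    A = np.zeros((6, 6))
    A[:3, :3] = skew(w)
    A[3:, :3] = skew(v)
    A[3:, 3:] = skew(w)
    return A

def inverse_dynamics(G_b, V_b, dV_b):
    """Wrench F_b needed to produce acceleration dV_b while moving with twist V_b."""
    return G_b @ dV_b - ad(V_b).T @ G_b @ V_b

def forward_dynamics(G_b, V_b, F_b):
    """Acceleration dV_b produced by the wrench F_b while moving with twist V_b."""
    return np.linalg.solve(G_b, F_b + ad(V_b).T @ G_b @ V_b)

# Kinetic energy for a sample twist, with made-up inertia values
G_b = spatial_inertia(mass=2.0, I_b=np.diag([0.1, 0.2, 0.3]))
V_b = np.array([0.0, 0.0, 1.0, 0.5, 0.0, 0.0])
print(0.5 * V_b @ G_b @ V_b)
```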
Modern_Robotics_All_Videos
Modern_Robotics_Introduction_to_the_Lightboard.txt
Welcome to the video supplements for the book Modern Robotics. My name is Kevin Lynch, and my co-author is Frank Park. We're shooting these videos using the Lightboard, a tool for video creation invented by Professor Michael Peshkin here at Northwestern. In front of me is a pane of glass that I can write on. On the other side of the glass is the camera. Unless you think I'm left-handed and really good at writing backwards, you've probably guessed that you're seeing the video image after it has been reversed, left-to-right. I'm actually right-handed, and I'm writing so that I can read the text. If this were a chalkboard in a classroom, I would turn around to address the class. With the glass and the video reversal, I can look at you and the board at the same time. This is an unreversed photograph of Professor Peshkin writing on the Lightboard. Throughout these videos I will be referring to rotation about an axis according to the right-hand rule. By this rule, positive rotation corresponds to the direction that the fingers of your right hand curl when your thumb points along the axis. To make it look right to you, I'm using my left hand. Chapter 1 is just an overview of the book, so let's dive right into Chapter 2 on configuration spaces.
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_113_Motion_Control_with_Velocity_Inputs_Part_2_of_3.txt
In the previous video we saw that, for the task of tracking a trajectory with a constant velocity, a proportional controller results in nonzero steady-state error c over K_p. To fix this, let's augment the P controller with another term that is proportional to the integral of the error over time. K_i is called the integral gain, since it multiplies the time integral of the error. We can calculate this time integral numerically in the control computer. We call this new controller proportional-integral control, or PI control for short. Plugging this controller into the error dynamics, we get this equation for the time-evolution of the error. To turn it into a differential equation, we can take the derivatives of both sides. We now have a standard second-order homogeneous differential equation, where the damping ratio is K_p over 2 times the square root of K_i and the natural frequency is the square root of K_i. By our mass-spring-damper analogy from earlier videos, K_i plays the role of the spring and K_p plays the role of the damper. If K_i and K_p are both positive, then the error dynamics are stable and the steady-state error is zero. In other words, the commanded velocity is nonzero when the error is zero because the integral of the error is not zero. The characteristic equation of the error dynamics is s-squared plus K_p s plus K_i equals zero, which means the roots are given by this equation. We can plot these roots in the complex plane. Here, the roots marked One correspond to overdamped error dynamics, where K_p is large relative to K_i. Here is a plot of the overdamped error response. If we increase the gain K_i until it equals K_p-squared over 4, so that the term in the square root is zero, we get a critically damped response, indicated as Two. Increasing the gain K_i pushes the two roots toward each other until critical damping, where both roots are located at minus K_p over 2. Note that the critically damped response is faster than the overdamped response. If we continue to increase the gain K_i, the term in the square root becomes negative, and the two roots become complex conjugates, moving away from each other in the vertical direction. The new roots, and the new response, are marked Three. Since the real values of the roots are unchanged, the settling time is unchanged, but we now see overshoot and oscillation in the error response. Of the three error responses, the critically damped response, marked Two, is the best, since it is fast and has no overshoot. In general, critical damping is a good goal for second-order error dynamics. The path that the two roots trace as we change the control gain K_i is called a "root locus." A root locus plots the roots as we change a single parameter, such as the gain K_i in this example. We can use the root locus to help us choose control gains. We would like to keep the roots far to the left, for fast settling, and close to the real axis, to minimize overshoot. As discussed before, though, there are limits as to how large we can choose the gains. Let's look at an example of P and PI control applied to tracking a trajectory with a constant velocity. The dashed line represents the desired position theta_d as a function of time. The actual initial position, theta at time zero, has some error, as shown by the dot. The P controller by itself cannot track the trajectory; in steady state, it always lags behind the desired position by c over K_p, as we calculated earlier. We can also see this in the error response. 
On the other hand, if we add an integral term, we see the PI controller achieves zero steady-state error. Here we've chosen the PI controller to be underdamped; a better PI controller would eliminate overshoot and achieve critical damping by choosing a lower gain K_i or a larger gain K_p. In summary, P control eliminates steady-state error for setpoint control, and PI control eliminates steady-state error for any trajectory with constant velocity. A PI controller cannot eliminate steady-state error for arbitrary trajectories, but a properly tuned PI controller should provide good tracking for many practical trajectories. A PI control system can be visualized as a block diagram, as shown here. The actual position is subtracted from the reference position to get the error, and this error is multiplied by the gain K_p, and integrated and multiplied by the gain K_i. These terms are then summed to produce the commanded joint velocity. The robot moves and a sensor returns the actual position to the controller. One problem with this control law is that the robot never moves until there is an error to force it to move. Since we know the desired trajectory, we should be able to move the robot without waiting for error to accumulate. We can augment this control law by adding a feedforward term. If the error is zero, the commanded velocity is just the desired velocity. This is our final preferred control law if the commanded controls are velocities. Until now we've been discussing a robot with a single joint, but the control law is unchanged for a multi-joint robot. Each joint has this same control law governing it. We can write the scalar equation for each joint as a single vector equation by treating theta, theta_d, and theta_e as vectors, and treating K_p and K_i each as an identity matrix times a positive scalar. The control law I've just described is expressed in terms of joint trajectories. But sometimes it's more convenient to express the desired motion in terms of the motion of the end-effector. This leads to task-space motion control, the topic of the next video.
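A simple simulation makes the comparison above concrete: a velocity-controlled joint tracking a constant-velocity reference under P, PI, and feedforward-plus-PI control. The gains and reference speed are made-up values chosen so that K_i equals K_p-squared over 4, giving critical damping.

```python
import numpy as np

dt, T = 0.001, 4.0
c, Kp, Ki = 1.0, 4.0, 4.0         # reference speed and gains; Ki = Kp^2/4 (critical damping)
steps = int(T / dt)

def simulate(use_integral, use_feedforward=False):
    theta, integral = 0.3, 0.0    # actual joint starts with 0.3 of initial error
    errors = []
    for i in range(steps):
        theta_d = c * (i * dt)    # reference: constant-velocity trajectory
        e = theta_d - theta
        integral += e * dt
        cmd = Kp * e + (Ki * integral if use_integral else 0.0)
        if use_feedforward:
            cmd += c              # feedforward of the desired velocity
        theta += cmd * dt         # velocity-controlled joint
        errors.append(e)
    return errors

print("P     steady-state error:", simulate(False)[-1])        # approaches c/Kp = 0.25
print("PI    steady-state error:", simulate(True)[-1])          # approaches 0
print("FF+PI steady-state error:", simulate(True, True)[-1])    # ~0, without waiting for error to build
```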
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_1333_Motion_Planning_for_Nonholonomic_Mobile_Robots.txt
Most path planners in Chapter 10 can be applied to omnidirectional mobile robots, because of their ability to move in any direction. The same is not true for nonholonomic mobile robots, due to their motion constraints. In this video we'll look at optimal motion plans for car-like robots in an obstacle-free plane, as well as motion planning among obstacles. Let's start with a car with no reverse gear. A typical path looks like this. Our goal is to find paths that minimize the length of the curve followed by the point midway between the rear wheels. Let C represent a circular arc that the car follows when it turns at its minimum turning radius, either to the right or to the left. And let C greater than pi represent such arcs that travel an angle of at least pi. Finally, let S represent the straight-ahead motion of the car. Then it can be shown that all shortest paths between two configurations are either of the form C, S, C, or C, C greater than pi, C, where any of the C or S segments could be of length zero. These are called Dubins curves in honor of the mathematician who proved this result. Here are two examples. In the first animation, the shortest path to the goal is a CSC path. In the second animation, the shortest path has the form C, C greater than pi, C. Now let's consider a car with a reverse gear. A result due to Reeds and Shepp says that all shortest paths belong to one of nine classes of paths, consisting of circular segments at the minimum turning radius, straight-line segments, and direction reversals, also called cusps. The details of the nine classes are in the book. Here are examples from three of the nine path classes. The first shortest path is a CSC path. The second path reverses the car's orientation using two cusps. The third shortest path has a single cusp. Dubins curves and Reeds-Shepp curves allow us to consider only a finite number of possible paths when planning the shortest path between two configurations in an obstacle-free plane. Reeds-Shepp curves can also be useful in motion planning for a car among obstacles. Given the start and goal configurations, first we can try connecting them by a Reeds-Shepp curve. If the path is in collision, then we can plan a free path between the two configurations using any path planner, ignoring the car's motion constraints. Provided this path does not graze any obstacle, then, because the car is small-time locally controllable everywhere, even though the car cannot follow the path exactly, it can follow it arbitrarily closely. To transform this infeasible path to a feasible path, first we can divide the path in half and try using Reeds-Shepp curves to connect q-zero to q-one-half, and q-one-half to q-one. The Reeds-Shepp path from q-one-half to q-one is collision-free, but the Reeds-Shepp path from q-zero to q-one-half is not. So we subdivide the first path segment again and find the Reeds-Shepp paths between q-zero and q-one-quarter and between q-one-quarter and q-one-half. These paths are both collision-free, so we have our final path. This subdivision process is guaranteed to converge to a collision-free path because: one, the original path has some free space around it; two, the car is small-time locally controllable, so it can follow the original path arbitrarily closely; and three, the Reeds-Shepp paths are short, so the distance from the original path goes to zero as the distance between the subdivision points goes to zero. 
Once we have a path for the robot, we can convert it to a trajectory by applying a time scaling subject to the robot's velocity limits. For a diff-drive robot, the shortest path in the obstacle-free plane for the point midway between the two wheels is trivial: spin in place, translate, then spin in place. A more interesting problem is to find the time-optimal motion if each wheel's speed is limited. This problem is discussed in the book. All of the optimal motion results discussed in this video can be derived using techniques from optimal control theory.
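The subdivision procedure described above can be sketched as a short recursive routine. The Reeds-Shepp planner and the collision checker are left as placeholder functions here, since their implementations are beyond the scope of this summary; `path(s)` is assumed to return the configuration at parameter s along the original, constraint-ignoring path.

```python
def connect_by_subdivision(path, s0, s1, reeds_shepp, collision_free, depth=0, max_depth=20):
    """Recursively replace the segment of `path` between parameters s0 and s1
    with collision-free Reeds-Shepp curves.  `reeds_shepp(q_a, q_b)` and
    `collision_free(segment)` are placeholders for a Reeds-Shepp planner and a
    collision checker."""
    segment = reeds_shepp(path(s0), path(s1))
    if collision_free(segment):
        return [segment]
    if depth >= max_depth:
        raise RuntimeError("subdivision did not converge (path may graze an obstacle)")
    mid = 0.5 * (s0 + s1)
    return (connect_by_subdivision(path, s0, mid, reeds_shepp, collision_free, depth + 1, max_depth)
            + connect_by_subdivision(path, mid, s1, reeds_shepp, collision_free, depth + 1, max_depth))
```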
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_52_Statics_of_Open_Chains.txt
Here you can see a 6R robot with a frame {b} at the hand. Imagine that the joints are moving according to a trajectory theta of t. The changing joint angles theta as a function of time move the hand along the path shown in yellow. The hand is moving in free space, so it is applying no forces to the environment. In Chapter 8, when we study the inverse dynamics of a robot, we will learn how the trajectory theta of t can be turned into the torques required to move the robot along the trajectory. We call these torques tau-motion of t. Now assume we choose a particular time instant t, and let tau-motion be the joint torques at this instant. Now assume that someone applies a wrench to the hand at this instant. Perhaps someone grabbed the hand of the robot. We will call this wrench minus F_b, consisting of three angular moments and three linear forces expressed in the {b} frame. If we want the robot to continue to track the planned trajectory, despite this disturbance wrench, the robot's motors must create a wrench F_b to balance the disturbance wrench. Therefore, the joint torques should be tau-motion plus tau, where we need to know how Fb relates to tau. To find this relationship, recall from physics that force times velocity is power. In the {b} frame, the wrench F_b created by the motors multiplies the twist V_b to get the mechanical power produced or consumed at the hand. This power must be coming from the motors, and we know that the power produced or consumed by the motors is the joint torques dotted with the joint velocities. If we plug in the identity J_b theta-dot equals V_b, and recognize that the equality must hold at all theta-dot, we get this equation, and getting rid of the transposes we get the relationship we were looking for, tau equals J_b-transpose times F_b. The exact same derivation holds for wrenches and Jacobians expressed in the space frame {s}, so we can generalize to the following main result of this video: To resist a wrench minus F applied to the end-effector at a configuration theta, the joint torques and forces tau must be J of theta transposed times F. This result holds no matter what frame the Jacobian and wrench are expressed in. This relationship can be useful in force control of a robot: if we want the end-effector to apply a wrench F to the environment, we use this formula to calculate the joint forces and torques tau. In the next video we will consider the implications of non-square and singular Jacobian matrices.
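A quick numerical check of the main result above: for any Jacobian, the joint effort tau = J_b-transpose times F_b is exactly the effort whose power against the joint velocities matches the power of the wrench against the end-effector twist. The 6-by-6 Jacobian below is a random placeholder standing in for the body Jacobian at some configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
J_b = rng.standard_normal((6, 6))     # hypothetical body Jacobian at some configuration
F_b = rng.standard_normal(6)          # wrench (moments, forces) the motors must create at the hand
theta_dot = rng.standard_normal(6)    # arbitrary joint velocities

tau = J_b.T @ F_b                     # joint torques/forces that create F_b
V_b = J_b @ theta_dot                 # end-effector twist produced by these joint velocities

# Power balance underlying the derivation: tau . theta_dot == F_b . V_b
print(np.isclose(tau @ theta_dot, F_b @ V_b))
```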
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_54_Manipulability.txt
A robot configuration is either singular or it's not. But even if a configuration is nonsingular, it still may be close to being singular. The manipulability ellipsoid we saw in the first video of this chapter is one way to visualize how close a robot is to being singular. For this 2_R robot, a circle of velocities in the joint space maps through the Jacobian to an ellipse of velocities at the tip of the robot depending on the robot's configuration. As the second joint angle approaches zero, the ellipse squashes in the direction it is difficult to move and stretches in the orthogonal direction, until, at the singularity, the ellipse collapses to a line segment. As we will see shortly, we can assign a measure of just how close the robot is to being singular according to how close the ellipse is to collapsing. To derive a general expression for the end-effector velocity ellipsoid, let's begin by assuming the end-effector's velocity is represented as the m-dimensional vector v_tip. The robot has n joints, so the Jacobian is an m by n matrix. A sphere of joint velocities, like the circle shown here, is defined by the equation theta-dot transpose times theta-dot equals 1. If we assume the Jacobian is invertible, which is not strictly necessary, then we can rewrite the equation as shown here. Rearranging, we get this, and rearranging once more, we get this. We summarize this equation as v_tip-transpose times A-inverse times v_tip equals 1, where A is the m-by-m matrix J times J-transpose. The A matrix is both symmetric and positive definite, and so is its inverse. Now assume we take this same equation but replace v_tip by a generic vector x. The eigenvalues of the matrix A are called lambda_1 to lambda_m, and the corresponding eigenvectors are v_1 to v_m. It is well known that the quadratic equation x-transpose times A-inverse times x equals 1 defines an ellipsoid of x values that satisfy the equation. In general, this ellipsoid is an m-minus-1-dimensional surface in the m-dimensional space of x, but this figure shows the case where x is a 3-vector. The principal axes of the ellipsoid are aligned with the eigenvectors of A and the half-lengths of the ellipsoid along the principal axes are the square roots of the eigenvalues. This geometric interpretation holds for any symmetric positive definite matrix A, but if we choose A equal to J times J-transpose, then the x-vector can be interpreted as v_tip, and the ellipsoid is called the manipulability ellipsoid resulting from the unit sphere of joint velocities. If instead we set A equal to the inverse of J times J-transpose, then the x-vector can be interpreted as the end-effector forces f_tip, and the ellipsoid is called the force ellipsoid resulting from a unit sphere of joint forces and torques. This figure shows the manipulability ellipsoid and the force ellipsoid for a 2R robot at a particular configuration. Since the matrix defining the manipulability ellipsoid is just the inverse of the matrix defining the force ellipsoid, the two ellipsoids have the same principal axes, and the lengths of the principal semi-axes are just the reciprocals of each other. In other words, only small forces can be applied in directions where large velocities can be attained, and only small velocities are possible in directions where large forces can be applied. Now that we can visualize the end-effector motion capabilities as a manipulability ellipsoid, we can assign a single number representing how close the robot is to being singular. 
These numbers are called manipulability measures. The first manipulability measure is the ratio of the longest axis to the shortest axis of the ellipsoid. This measure is lower-bounded by 1, and if it is equal to 1, we say that the manipulability ellipsoid is isotropic; it is equally easy to move in any direction. On the other hand, as the robot approaches a singularity, this number grows large. The second measure is just the square of the first measure, often called the condition number of the matrix A. A final measure is the square root of the product of the eigenvalues of A, which is proportional to the volume of the manipulability ellipsoid. If the manipulability ellipsoid volume becomes large, then the force ellipsoid volume becomes small, and vice-versa. Finally, consider the case that the Jacobian corresponds to the body Jacobian derived in this chapter. The 6-by-n body Jacobian can be split into the 3-by-n angular velocity Jacobian J-b-omega and the 3-by-n linear velocity Jacobian J-b-v. This separation into linear and angular components is useful, because the units of the angular velocity and linear velocity are different. Then for any configuration of the robot, J_b-omega can be used to create angular velocity manipulability ellipsoids and angular moment ellipsoids and J_bv can be used to create linear velocity manipulability ellipsoids and linear force ellipsoids. So, this concludes Chapter 5. You should now have a solid understanding of how to derive and interpret the Jacobian, a fundamental object in robotics that is heavily used in many robot motion planners and controllers. In Chapter 6, we study another key issue in robot motion planning and control: inverse kinematics, or, how to find joint positions that achieve a desired end-effector configuration.
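The three manipulability measures can be computed directly from the Jacobian, as in this sketch; the planar 2R Jacobian with unit link lengths and the sample configurations are illustrative choices.

```python
import numpy as np

def manipulability_measures(J):
    """Measures of how close a configuration is to singular, from A = J J^T."""
    eigvals = np.linalg.eigvalsh(J @ J.T)
    mu1 = np.sqrt(eigvals.max() / eigvals.min())   # ratio of longest to shortest ellipsoid axis
    mu2 = eigvals.max() / eigvals.min()            # condition number of A
    mu3 = np.sqrt(np.prod(eigvals))                # proportional to the ellipsoid volume
    return mu1, mu2, mu3

def J_2R(theta, L1=1.0, L2=1.0):
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([[-L1*np.sin(t1) - L2*np.sin(t12), -L2*np.sin(t12)],
                     [ L1*np.cos(t1) + L2*np.cos(t12),  L2*np.cos(t12)]])

print(manipulability_measures(J_2R([0.0, np.pi / 2])))   # well-conditioned configuration
print(manipulability_measures(J_2R([0.0, 0.01])))        # close to singular: mu1, mu2 blow up
```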
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_62_Numerical_Inverse_Kinematics_Part_1_of_2.txt
The forward kinematics maps the joint vector theta to the transformation matrix representing the configuration of the end-effector. For simplicity, we will start instead with a coordinate-based forward kinematics, where f of theta is a minimal set of coordinates describing the end-effector configuration. Then the inverse kinematics problem is to find a joint vector theta_d satisfying x_d minus f of theta_d equals zero, where x_d is the desired end-effector configuration. To solve this problem, we will use the Newton-Raphson numerical root-finding method. In the case that theta and f-of-theta are scalars, the Newton Raphson method can be illustrated easily. Here is a plot of the desired end-effector position x_d minus f-of-theta as a function of theta. The roots of this function correspond to joint values theta that solve the inverse kinematics. In this example, two values of theta solve the inverse kinematics. Now, with the benefit of hindsight, we designate one of the solutions as theta_d. We also make an initial guess at the solution, theta_zero. At that guess, we can calculate the value of x_d minus f-of-theta. Since we know the forward kinematics f-of-theta, we can calculate the slope of x_d minus f of theta. If we extend the slope to where it crosses the theta-axis, we get our new guess theta_1. The change delta-theta in the guess is given by the expression in the figure. If the function x_d minus f were linear, theta_1 would be an exact solution. Since it is not linear in general, theta_1 is only closer to a solution, not an exact solution. Now we can repeat the process, getting a new guess theta_2, and continue until the sequence theta_zero, theta_1, theta_2, etc., converges to the solution theta_d. If our initial guess theta_zero had been to the left of the plateau in the function x_d minus f-of-theta, then the iterative process may have converged to the root on the left. In general, the initial guess should be close to a solution to ensure that the process converges. If the initial guess were near the top of the plateau, the calculated slope would have been small, and the next iteration would be far away, where it may be difficult to converge to a solution. To generalize the Newton-Raphson procedure to vectors of joints and endpoint coordinates, not just scalars, we can write the Taylor expansion of the function f-of-theta around theta_d, as shown here. f-of-theta_d is equal to f-of-theta_i, where theta_i is the current guess at the solution, plus the Jacobian of f evaluated at theta_i times delta-theta, plus higher-order terms. If we ignore the higher-order terms, this simplifies to x_d minus f-of-theta_i equals J-of-theta_i times delta-theta. We can solve for delta-theta as J-inverse times x_d minus f-of-theta_i. Of course, this only works if J is invertible. If J is not invertible, because it is not square or because the robot is at a singularity, we need a different way to calculate delta-theta. Let's rewrite the equation we are trying to solve and number it (1). Instead of premultiplying both sides by J-inverse, we could premultiply by the pseudoinverse of J. The pseudoinverse reduces to the matrix inverse in the case that J is invertible, but it can also be calculated for non-square and singular matrices. The pseudoinverse has the following nice properties: If there exists more than one solution exactly satisfying equation (1), for instance if the robot is redundant, then the pseudoinverse finds a solution vector theta-star that has the smallest length among all solutions. 
In other words, the change in joint values is as small as possible while still satisfying equation (1). On the other hand, if the robot is at a singularity or if it does not have enough joints to exactly satisfy equation (1), then theta-star calculated by the pseudoinverse is one that minimizes the error in satisfying equation (1). We can now state the Newton-Raphson numerical inverse kinematics algorithm. Starting from an initial guess theta_zero, we calculate the end-effector error e. If it is small enough, then theta_zero is our solution. If not, then we add the pseudoinverse of J times the error e to our guess and repeat. We can use this algorithm inside a robot controller. At every timestep, an updated desired end-effector configuration x_d is sent to the controller, and it calculates an appropriate joint vector using Newton-Raphson. The previous joint vector is a good initial guess for the new joint vector, since the updated x_d should be close to the previous x_d. In the next video we adapt this algorithm to the case where the end-effector configuration is represented by a transformation matrix and the Jacobian is the body Jacobian.
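Here is a minimal sketch of the coordinate-based Newton-Raphson inverse kinematics described above, applied to a planar 2R arm with unit link lengths (an assumed example, not the book's library code):

```python
import numpy as np

def fk_2R(theta, L1=1.0, L2=1.0):
    """Tip coordinates of a planar 2R arm."""
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([L1*np.cos(t1) + L2*np.cos(t12), L1*np.sin(t1) + L2*np.sin(t12)])

def jacobian_2R(theta, L1=1.0, L2=1.0):
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([[-L1*np.sin(t1) - L2*np.sin(t12), -L2*np.sin(t12)],
                     [ L1*np.cos(t1) + L2*np.cos(t12),  L2*np.cos(t12)]])

def ik_newton_raphson(x_d, theta0, tol=1e-6, max_iters=50):
    """Newton-Raphson IK with the pseudoinverse; needs a good initial guess theta0."""
    theta = np.array(theta0, dtype=float)
    for _ in range(max_iters):
        e = np.asarray(x_d, float) - fk_2R(theta)       # end-effector coordinate error
        if np.linalg.norm(e) < tol:
            return theta
        theta = theta + np.linalg.pinv(jacobian_2R(theta)) @ e
    raise RuntimeError("did not converge; try a different initial guess")

print(ik_newton_raphson(x_d=[1.0, 1.0], theta0=[0.1, 0.5]))
```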
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_323_Exponential_Coordinates_of_Rotation_Part_1_of_2.txt
Any orientation can be achieved from an initial orientation aligned with the space frame by rotating about some unit axis by a particular angle. We call the unit axis omega-hat and the rotation distance theta. If we multiply these two together, we get the 3-vector omega-hat theta. This is a 3-parameter representation of orientation. We call these 3 parameters the exponential coordinates representing the orientation of one frame relative to another. This is an alternative representation to a rotation matrix. We call these exponential coordinates because of the connection to linear differential equations. In particular, we should view omega-hat as an angular velocity that is followed for theta seconds, and we have to integrate the angular velocity from the initial orientation to find the final orientation. Before solving that problem, let's look at a familiar problem in linear ordinary differential equations in a single variable: x-dot = a times x, where a is a constant. The solution, as you learn in any course on differential equations, is e to the a t times x at time zero, where the exponential function e to the a t is defined by the series expansion shown here. This scalar linear differential equation has an analogous vector linear differential equation, where x is now an n-vector and A is a constant n by n matrix. The solution to this differential equation has the same form as the single-variable case. The term e to the A t is called a matrix exponential. As we'll see in the next video, this equation can be used to integrate an angular velocity, where the matrix A is the 3 by 3 skew-symmetric representation of the angular velocity.
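A short numerical sketch of the exponential coordinates: integrating the constant unit angular velocity omega-hat for theta seconds with a matrix exponential, and comparing against the closed-form Rodrigues formula for rotations.

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

omega_hat = np.array([0.0, 0.0, 1.0])   # unit rotation axis
theta = np.pi / 3                       # rotation angle ("time" the unit velocity is followed)

# Integrating x_dot = [omega_hat] x for theta seconds gives the rotation matrix
R = expm(skew(omega_hat) * theta)

# Same result from Rodrigues' formula: I + sin(theta)[w] + (1 - cos(theta))[w]^2
W = skew(omega_hat)
R_rodrigues = np.eye(3) + np.sin(theta) * W + (1 - np.cos(theta)) * (W @ W)
print(np.allclose(R, R_rodrigues))      # True
```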
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_332_Twists_Part_1_of_2.txt
As we learned in the last video, a transformation matrix T can be used to represent the configuration of the body frame {b} relative to the space frame {s}. Now we need to represent the velocity of the body frame. Just as the time derivative of a rotation matrix was not our representation of angular velocity, the time derivative of a transformation matrix is not our representation of a rigid-body velocity. Let's just jump right to our representation, without deriving it. You can see the details of the derivation in the book. It turns out that any rigid-body velocity, which consists of a linear component and an angular component, is equivalent to the instantaneous velocity about some screw axis. The screw axis is defined by a point q on the axis; a unit vector s in the direction of the axis; and the pitch h of the screw, which is the ratio of the linear speed along the axis to the angular speed about the axis. For now we will assume that the pitch h is finite; later we will return to the case where the pitch is infinite. Given any linear and angular velocity of a body, there is a corresponding screw axis. It's as if the body's instantaneous motion is twisting about the screw axis. The screw axis defines the direction the body is moving, and theta-dot is a scalar indicating how fast the body rotates about the screw. Our representation of a screw axis is not a point q, a unit vector s, and a pitch h, however. Instead, we choose a reference frame, and we define the screw axis S as a 6-vector in that frame's coordinates, consisting of S-omega, the 3-dimensional unit angular velocity when the rotational speed theta-dot is 1, and S_v, the 3-dimensional linear velocity of the origin of the frame when the rotational speed is 1. The linear velocity of the origin, as you see in the figure, is a combination of two terms: h times s, which is the linear velocity due to translation along the screw axis if there is a nonzero pitch, and -s cross q, which is the linear velocity due to rotation about the screw axis. Multiplying our representation of the screw axis S by the scalar rate of rotation theta-dot, we get the twist, a full representation of angular and linear velocity. Let's look at a simple example, where the screw axis is a zero pitch screw, a pure rotation like a turntable. The axis is pointing toward you, out of your screen. This is an animation of a turntable moved by the screw axis. We start rotating about the screw at a rate of theta-dot = 1. Defining a reference frame as shown, we see that the angular velocity S-omega is 1 about the z-axis, which is also out of the screen. Since the reference frame is 2 units from the screw axis, the linear velocity at the frame origin is 2 units in the minus y direction, so we get S_v equal to (0,-2,0). We can choose a reference frame at a different location. In this frame, the angular velocity is the same as before, but S_v is different. Finally, if we choose a reference frame on the screw axis itself, S_v is zero. Because the frame has a different orientation from before, the angular velocity is now 1 unit in the minus y direction. We have been focusing on the case where the screw axis has finite pitch, but there are two cases to consider: the pitch is infinite, or the pitch is finite. If the pitch is infinite, the motion is a pure linear motion with no rotation. In this case, S-omega is zero, S_v is a unit vector, and theta-dot indicates the linear speed. 
If the pitch is finite, S-omega is a unit vector and theta-dot is the rotational speed in radians per second. If the screw axis S is expressed in coordinates of the body frame {b}, then S-theta-dot is called the body twist V_b. If the screw axis S is expressed in coordinates of the space frame {s}, then S-theta-dot is called the spatial twist V_s. In summary, a twist is a 6-vector consisting of a 3-vector expressing the angular velocity and a 3-vector expressing the linear velocity. Both of these are written in coordinates of the same frame, and the linear velocity refers to the linear velocity of a point at the origin of that frame. Both the body twist and the spatial twist represent the same motion, just in different coordinate frames. The body twist is not affected by the choice of the space frame, and the spatial twist is not affected by the choice of the body frame. In the next video we discuss a matrix representation of twists, which will be used in the matrix exponential for rigid-body motion.
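The screw-axis representation described above can be summarized in a few lines of numpy; the turntable numbers reproduce the example in the video, where the reference frame lies 2 units from a zero-pitch screw axis.

```python
import numpy as np

def screw_axis(q, s_hat, h):
    """6-vector screw axis S = (omega, v) from a point q on the axis, a unit
    direction s_hat, and a finite pitch h, all expressed in one chosen frame."""
    s_hat = np.asarray(s_hat, float)
    q = np.asarray(q, float)
    omega = s_hat
    v = -np.cross(s_hat, q) + h * s_hat
    return np.concatenate([omega, v])

# Turntable example: zero-pitch axis along z, passing 2 units from the
# reference-frame origin along the frame's x-axis
S = screw_axis(q=[2, 0, 0], s_hat=[0, 0, 1], h=0)
print(S)                    # [0. 0. 1. 0. -2. 0.]

theta_dot = 1.0
V = S * theta_dot           # the twist: (angular velocity, linear velocity of a point at the origin)
```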
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_34_Wrenches.txt
A robot hand is holding this apple in gravity, and the robot is equipped with a force-torque sensor at its wrist. It measures forces and torques in the frame {f}. If we know the mass of the apple, the direction of gravity, and the location of the apple in the hand, what are the forces and torques measured by the sensor? In this final video of Chapter 3 we will develop the representations and transformations needed to answer this question. Here we see two frames, {s} and {b}. A line of force f_b acts at the point r_b, both represented in the {b} frame. f_b is a 3-vector specifying the magnitude of the force in 3 directions. From physics we know that this force induces a 3-vector torque, or moment, about the frame {b} equal to r_b cross f_b. We can package the moment and the force together in a single 6-vector called the wrench, just as we packaged the angular and linear velocity of a rigid body into a twist. Since we know the transform T_sb, we should be able to represent this same wrench in the {s} frame. To derive the relationship between the wrenches F_b and F_s, keep in mind this fact: the dot product of a twist and a wrench is power. Power does not depend on a coordinate frame, and therefore the power must be the same whether the wrench and twist are represented in the {b} frame or in the {s} frame. Using our rule to change the frame of representation of a twist, we can express V_b in terms of T_sb and V_s. Since the transpose of the product of a matrix and a vector is equal to the product of the vector transposed and the matrix transposed, we can rewrite the equation as shown here. Finally, this equation holds for all twists V_s, so it simplifies to the relationship we are looking for, changing the coordinate frame of the wrench from the {b} frame to the {s} frame. Returning to our apple example, we can define a frame {a} at the center of mass of the apple. In this frame, the force due to gravity is mg in the minus y direction, and the moment is zero, since the force vector passes through the origin of the {a} frame. To transform to the force sensor frame, we use T_af, the configuration of the force sensor frame relative to the apple frame, and we see that the wrench F_f has a moment of negative mgL about the z-axis of the {f} frame. So, this concludes Chapter 3. The material in this chapter is fundamental to representing motion and forces in three-dimensional space, for robots and other types of mechanical systems. We're now equipped with the tools we need to study the kinematics and statics of robots, which begins in Chapter 4.
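A short sketch of this wrench transformation follows. The apple mass, gravity, and distance L are assumed values, and the geometry is assumed as well: {a} and {f} share the same orientation, with the {f} origin a distance L in the minus x_a direction from the apple's center of mass. Wrenches are ordered as (moment, force), and the adjoint matrix is built directly from R and p.

import numpy as np

def adjoint(T):
    # 6x6 adjoint representation of T = (R, p), with (angular, linear) ordering.
    R, p = T[:3, :3], T[:3, 3]
    p_skew = np.array([[0, -p[2], p[1]],
                       [p[2], 0, -p[0]],
                       [-p[1], p[0], 0]])
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[3:, :3] = p_skew @ R
    Ad[3:, 3:] = R
    return Ad

m, g, L = 0.1, 9.81, 0.1          # assumed apple mass, gravity, sensor-to-apple distance
T_af = np.eye(4)                  # assumed: same orientation, {f} origin at (-L, 0, 0) in {a}
T_af[0, 3] = -L

F_a = np.array([0, 0, 0, 0, -m * g, 0])   # gravity wrench on the apple, in {a}
F_f = adjoint(T_af).T @ F_a               # change the wrench's frame from {a} to {f}
print(F_f)                                # moment of -mgL about z_f, force -mg along y_f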
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_135_Mobile_Manipulation.txt
A mobile manipulator consists of a mobile base outfitted with one or more robot arms, such as this omnidirectional base with a 5-joint arm. To control the motion of the end-effector, we can control both the wheels and the arm joints. Since the arm motion is typically more precise than the motion of the base, a common way to control a mobile manipulator is to drive the mobile base to some location, park it, and then use the robot arm for high-precision manipulation. In some cases, though, we need to coordinate the simultaneous control of the wheels and the joints. As an example, if we want to control the full 6-dimensional twist of this robot's end-effector, we have to use the wheels and joints together, since the arm has only 5 degrees of freedom. In this final video of Chapter 13, we focus on coordinated control of the mobile base and the robot arm. In this animation, the robot's end-effector tracks a pre-planned trajectory in SE(3). In the animation below, the robot starts with some error in the configuration of the end-effector, but the feedback controller quickly brings it back to the pre-planned trajectory. At the end of the motion, note that the mobile bases are at different configurations, but the purpose of the controller is only to track the planned end-effector trajectory. Let's watch the desired trajectory and the feedback-controlled trajectory one more time. The final end-effector configurations are identical. To derive this controller, we need the Jacobian mapping the wheel and joint speeds to the twist of the end-effector. To derive this Jacobian, let's first define the frames we'll use. We assume a space frame {s} and 3 frames attached to the mobile manipulator: {b} is the reference frame of the mobile base, {zero} is at the base of the robot arm, and {e} is at the end-effector. It's possible to define the {b} and {zero} frames to be the same, but I'm separating them for generality. The end-effector's configuration in the space frame can be written X of q and theta, or T_se of q and theta. q is the configuration of the mobile base and theta is the arm configuration. This transformation is obtained by multiplying T_sb of q, the constant offset T_b-zero, and T_zero-e of theta, the end-effector frame relative to the {zero} frame. T_sb can be written as a function of the chassis configuration q = (phi, x, y), where z is the constant height of the frame. T_zero-e is determined by the forward kinematics of the arm. With these definitions, the end-effector twist V_e, expressed in the end-effector frame, is the Jacobian J_e of theta times the vector of wheel speeds u and joint velocities theta-dot. There are m wheel speeds and n joint velocities, so J_e is a 6-by-m-plus-n matrix. The J_e matrix can be decomposed into the 6-by-m matrix J_base and the 6-by-n matrix J_arm. J_arm is the same as the body Jacobian J_b, so the only new thing we need to derive is J_base. Note that J_e only depends on the joint configuration theta, not the chassis configuration q, since the end-effector twist expressed in the end-effector frame is independent of the chassis' position and orientation. To derive J_base, we write the planar twist of the chassis, expressed in the chassis frame, as V_b equals F times u, where F is the 3-by-m transformation discussed in earlier videos. By adding rows of zeros above and below, we create the 6-by-m matrix F_6 satisfying V_b6 equals F_6 times u, where V_b6 is the six-dimensional chassis twist expressed in the chassis frame.
To express this twist in the end-effector frame, we premultiply it by the Adjoint matrix of T_eb. We can expand this transform to be T_e-zero times T_zero-b, and then write V_b6 as F_6 u. Then J_base is just the Adjoint matrix times F_6. Now that we have the full mobile manipulator Jacobian J_e, we can choose the task-space feedforward plus PI controller from Chapter 11. Remember that X_err is the twist, expressed in the end-effector frame, that takes the actual end-effector configuration to the desired configuration in unit time. Once we've calculated the commanded twist V, expressed in the end-effector frame, the wheel and joint velocities are calculated using the pseudoinverse of J_e. As an example, consider a planar mobile manipulator with a diff-drive mobile base and a robot arm with a single revolute joint. The desired path for the end-effector is a semicircle, and the end-effector has a large initial error. The path is specified by three variables, and the robot has three controls, one for each wheel and one for the arm joint. So the robot in this example is not redundant for the task of trajectory following, as it was for the example at the beginning of this video. The task-space feedforward plus PI controller drives the robot along the path shown here. The end-effector error converges to zero after a small overshoot, due to the choice of the integral gain. We used the same control law for the robot with four wheel speeds and five joint speeds at the beginning of this video. So that's it for Chapter 13, and the book! Congratulations on making it this far! You should now have a firm foundation for the practice of robotics, or for advanced study in robot motion planning, control, and manipulation. Now go do something great.
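A sketch of assembling this Jacobian, assuming the chassis F matrix and the arm's body Jacobian J_arm are already available from the earlier derivations; the function names are illustrative, and only NumPy is used.

import numpy as np

def adjoint(T):
    # 6x6 adjoint of T = (R, p), twists ordered (angular, linear).
    R, p = T[:3, :3], T[:3, 3]
    p_skew = np.array([[0, -p[2], p[1]],
                       [p[2], 0, -p[0]],
                       [-p[1], p[0], 0]])
    Ad = np.zeros((6, 6))
    Ad[:3, :3], Ad[3:, :3], Ad[3:, 3:] = R, p_skew @ R, R
    return Ad

def mobile_manipulator_jacobian(T_eb, F, J_arm):
    # F is the 3-by-m matrix mapping wheel speeds u to the planar chassis twist V_b.
    # F6 is the 6-by-m matrix mapping u to the 6-dimensional twist V_b6, built by
    # adding rows of zeros above and below F.
    m = F.shape[1]
    F6 = np.vstack([np.zeros((2, m)), F, np.zeros((1, m))])
    J_base = adjoint(T_eb) @ F6           # 6-by-m
    return np.hstack([J_base, J_arm])     # 6-by-(m+n)

# With J_e in hand, the commanded twist V maps to wheel and joint speeds by
#   speeds = np.linalg.pinv(J_e) @ V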
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_111_Control_System_Overview.txt
Every robot has a controller, which continuously reads from sensors like motor encoders, force sensors, or even vision or depth sensors, and updates the actuator commands so as to achieve the desired robot behavior. Examples of control objectives include motion control, as when a robot arm moves along a specified trajectory; force control, where the objective is to apply specific forces to an object or the environment; hybrid motion-force control, as when writing on a board: you control the motion in the plane of the board but the force into the board; and impedance control, as when a robot is used to render a virtual environment. In this case, the user grabs the end-effector of the robot and moves it around to explore objects in a virtual world, which could be displayed to the user as masses, springs, or dampers. If the robot is a robot arm driven by electric motors, this is a typical electromechanical block diagram. A power supply takes AC power from the wall and delivers DC power to motor amplifiers. The controller takes as input a desired motion from the user and sensor feedback from the robot. At a rate of perhaps a thousand times per second, the controller evaluates a control law and requests joint torques from each motor amplifier. At each joint, the amplifier sends a current to the motor to achieve the desired torque, since the torque of an electric motor is proportional to the current. Typically a current sensor senses the actual current, and the amplifier updates its signal to better achieve the current needed to generate the desired torque. These inner control loops can run tens of thousands of times per second. Some robot joints have torque sensors embedded in the actuators themselves, and this feedback is used in the local torque control loop. Finally, the motors are coupled to each other through the dynamics of the arm, and the actual motion of the robot is measured by the encoders. The measured motion is sent to the controller. This is a block diagram of the robot control system. The controller produces low-power signals telling the amplifiers what to do; the amplifiers send high-power current through the motors, which produce the forces and torques that drive the robot. The robot's motion and forces are measured by sensors that send the measurements back to the controller. We call this closed-loop control because of the sensor feedback. It's also common to model force disturbances and sensor errors as being inserted into the control loop. In this chapter, though, we will simplify our analysis by assuming that the amplifiers and actuators work perfectly to generate the control forces requested by the controller and that the sensors measure the robot's performance perfectly. We also ignore the fact that the controller is typically implemented at a finite frequency and instead assume that control laws are implemented in continuous time. Then our block diagram can be simplified to this block diagram, consisting of only the controller and the dynamics blocks. Chapter 8 covered the dynamics of a robot. In this chapter, we will derive the control laws that drive a robot. We begin this process in the next video by introducing the notion of error dynamics.
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_232_Configuration_Space_Representation.txt
To represent a C-space using real numbers, we have to make some arbitrary choices. For example, to represent points on a plane, we choose a point in the space as the origin, and two orthogonal coordinate axes. With that choice, we can represent any point as a list of two coordinates, x-y. Of course our representation of the space does not change the underlying space itself. Therefore, the topology of the space is independent of our representation of the space. If the space is "flat," like a line, a plane, or more generally an n-dimensional Euclidean space, we typically choose an origin and coordinate axes and then use coordinates to represent a point. This is what we are most familiar with. A velocity is then just the time derivative of those coordinates. If the space is curved, however, like a sphere, we have two ways we could represent it: we could either use an EXPLICIT PARAMETRIZATION, which uses a minimum number of coordinates to represent the space, such as latitude and longitude for a sphere. Or we could use an IMPLICIT REPRESENTATION, which uses more coordinates, subject to constraints. An implicit representation views the n-dimensional space as embedded in a higher-dimensional Euclidean space. In the sphere example, we view the two-dimensional surface as embedded in a three-dimensional Euclidean space, and we use three Euclidean coordinates, x-y-z , subject to a single constant radius constraint. As we learned before, one constraint on three coordinates implies two degrees of freedom, that is, a two-dimensional C-space. So how do we choose between explicit and implicit representations? An advantage of the explicit parametrization is the simplicity of a minimum number of coordinates. A disadvantage is that, because the topology of the space is different from a Euclidean space, the representation will have poor behavior at some points of the space. For example, if you walk at a constant speed along a constant latitude near the equator, your longitude changes slowly. If you do it near the North Pole, however, your longitude changes very quickly, with no upper bound as you get closer to the North Pole. The North Pole is called a SINGULARITY of the representation. Also, the moment you step over the North Pole, your longitude changes by 180 degrees. The rapidly changing coordinates and discontinuities at certain points in the space are not great properties of a representation. Keep in mind that this has nothing to do with the topology of the sphere: the sphere looks the same everywhere, at the North Pole or on the equator. It is only an issue with our representation of the sphere. With the implicit representation using x-y-z coordinates subject to one constraint, there are no problems anywhere with discontinuities or rapidly changing coordinates. The disadvantage is the somewhat greater complexity of the representation. Throughout this book we use implicit representations, particularly for the curved, non-Euclidean space of orientations of a rigid body. The singularity-free implicit representation we use is called the ROTATION MATRIX. In summary, we typically do not represent configurations using a minimum set of coordinates, and we typically do not represent velocities as the time rate of change of coordinates. In the next video, we'll learn about two types of constraints on the motion of a robot: configuration constraints and velocity constraints.
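A tiny numerical sketch of that singularity, assuming a unit sphere and a walking speed of 1: the longitude rate is the speed divided by the cosine of the latitude, which grows without bound as you approach the pole.

import numpy as np

speed = 1.0                               # assumed walking speed on a unit sphere
for lat_deg in [0, 60, 85, 89.9]:
    lat = np.radians(lat_deg)
    print(lat_deg, speed / np.cos(lat))   # 1.0, 2.0, ~11.5, ~573 (longitude rate)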
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_1224_Duality_of_Force_and_Motion_Freedoms.txt
To analyze a manipulation task involving contacts, we combine our modeling of contact kinematics with the Coulomb friction model. At any instant, a contact could be breaking, sliding, or rolling, and each of these cases implies different constraints on the feasible motions and forces at the contact. Importantly, though, the total number of constraints on the motions and forces is the same, regardless of the contact label. For simplicity let's assume first-order dynamics dealing with forces and velocities, but similar arguments apply when we consider second-order dynamics with forces and accelerations. As an example, consider a single contact between a stationary finger and a rectangular object that moves in a plane. The friction cone is indicated in yellow. If we assume the contact is breaking, then there are no equality constraints on the velocity of the rectangle, and there are 2 equality constraints on the force applied by the finger, namely, that the force is zero. Let's begin to construct a table of our observations. For a breaking contact B, the velocity at the contact point on the moving body can be any linear velocity in a 2-dimensional set. In other words, there are no equality constraints on the linear velocity of the point. Now considering the forces at the contact, there are 2 equality constraints, namely that the force in the normal and tangential directions must be zero, so there are zero force freedoms. If the contact is sliding, then the contact force is constrained to be somewhere on the edge of the friction cone resisting sliding. Referring again to the table, there is 1 constraint on the velocity, that the normal velocity at the contact is zero, and 1 freedom to choose the magnitude of the sliding velocity. Similarly, there is 1 constraint on the contact force, that the angle of the force must be on an edge of the friction cone, and 1 freedom in choosing the magnitude of the friction force. If the contact is a rolling contact, then the instantaneous relative velocity at the contact is zero, and the contact force can be anywhere inside the friction cone. This cone is a 2-dimensional space of force vectors with bases at the contact point and tips somewhere inside the shaded region. Referring again to our table, the zero relative velocity at the contact means 2 constraints and zero freedoms for the relative velocity. The contact force has zero equality constraints and 2 freedoms. So the full table for planar contacts looks like this. Notice that when we solve for the forces and velocities of rigid bodies in contact, the total number of equality constraints on motion and force is 2 for each contact label. If the contacts are in 3-dimensional space instead of a plane, each contact label provides 3 total constraints when we solve for the velocities and forces, and the full table looks like this. Breaking contacts provide the fewest constraints on velocity and the most constraints on forces, while rolling contacts provide the most constraints on velocity and the fewest constraints on forces.
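The planar table just described can be written out compactly; the pairs below are (velocity equality constraints, force equality constraints) for each contact label, and, as noted above, each pair sums to 2 for planar contacts (and the corresponding spatial totals sum to 3).

# (velocity equality constraints, force equality constraints) per planar contact label
planar_constraints = {"B": (0, 2), "S": (1, 1), "R": (2, 0)}

# Each label contributes the same total number of equality constraints when solving
# for the velocities and forces of bodies in contact: 2 per planar contact.
assert all(sum(pair) == 2 for pair in planar_constraints.values())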
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_321_Rotation_Matrices_Part_1_of_2.txt
We begin our study of the representation of the configuration of a rigid body by focusing on orientation only. The approach to representing the full configuration of a rigid body is analogous. Consider two frames, a space frame {s} and a body frame {b}. They are shown at different locations, but we are focusing on their orientations. We can express the orientation of the frame {b} relative to {s} by writing the unit coordinate axes of frame {b} in the coordinates of frame {s}. In the coordinates of {s}, the x_b-axis is 0, 1, 0, the y_b-axis is -1, 0, 0, and the z_b-axis is 0, 0, 1. We can write these column vectors side by side to form the rotation matrix R_sb. The second subscript, {b}, indicates the frame whose orientation is being represented, and the first subscript, {s}, is the frame of reference. Sometimes the two subscripts are implicit and we leave them out, writing the rotation matrix simply as R. As we learned in Chapter 2, the space of orientations of a rigid body is only 3 dimensional, but we have 9 numbers in a rotation matrix. That means the 9 entries of the matrix must be subject to 6 constraints. Three of those constraints are that the column vectors are all unit vectors, and the other 3 are that the dot product of any two of the column vectors is zero. In other words, the 3 vectors are orthogonal to each other. These 6 constraints can be written compactly as R transpose times R is equal to the 3 by 3 identity matrix I. These constraints ensure that the determinant of R is either 1, corresponding to right-handed frames, or -1, corresponding to left-handed frames. We only use right-handed frames, so the determinant of R must be 1. The set of all rotation matrices is called the special orthogonal group SO(3): the set of all 3x3 real matrices R such that R transpose R is equal to the identity matrix and the determinant of R is equal to 1. Rotation matrices satisfy the following properties: The inverse of R is equal to its transpose, which is also a rotation matrix. The matrix product of two rotation matrices is also a rotation matrix. Matrix multiplication is associative, but in general it is not commutative. Finally, for any 3-vector x, R times x has the same length as x. As we will see later, this means that rotating a vector does not change its length. In the next video, we will study 3 common uses of rotation matrices.
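A small sketch checking the SO(3) conditions and the listed properties for the example rotation above, where the axes of {b} expressed in {s} are x_b = (0, 1, 0), y_b = (-1, 0, 0), and z_b = (0, 0, 1).

import numpy as np

# R_sb: columns are the axes of {b} expressed in {s}, from the example.
R_sb = np.array([[0, -1, 0],
                 [1,  0, 0],
                 [0,  0, 1]])

assert np.allclose(R_sb.T @ R_sb, np.eye(3))      # the six orthonormality constraints
assert np.isclose(np.linalg.det(R_sb), 1.0)       # right-handed frame

# The inverse is the transpose, and rotating a vector does not change its length.
assert np.allclose(np.linalg.inv(R_sb), R_sb.T)
x = np.array([1.0, 2.0, 3.0])
assert np.isclose(np.linalg.norm(R_sb @ x), np.linalg.norm(x))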
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_134_Odometry.txt
Odometry is the process of estimating the chassis configuration from wheel motions, essentially integrating the effect of the wheel velocities. Since wheel-rotation sensing is available on all mobile robots, odometry is convenient. Odometry errors tend to accumulate over time, though, due to slipping or skidding of the wheels and numerical integration error. For this reason, odometry is usually supplemented with estimation techniques using exteroceptive sensors, like cameras, range sensors, and GPS. This video focuses on odometry. In this video, the 6-dimensional twist of the chassis, expressed in the chassis frame {b}, is written V_b6. Since the chassis motion is planar, three of the components are zero. The remaining three velocities in the plane are collectively written as V_b, a planar twist. With this as background, the odometry process consists of the following steps. First, we measure the wheel rotations Delta theta since the last timestep, typically using encoders. Next, we assume that the wheel velocities were constant since the last encoder readings at time Delta t in the past. Because it doesn't matter which units we use to measure time, we can assume Delta t is equal to 1 unit, and therefore theta-dot is equal to Delta theta. Next, we find the chassis planar twist V_b that corresponds to Delta theta. Next, we use the matrix exponential to integrate the corresponding 6-dimensional twist V_b6 for time Delta t equal to 1 to find the configuration of the chassis frame at step k-plus-1 relative to the chassis frame at step k. Finally we express the new chassis frame relative to the space frame. Since the other steps are straightforward, let's focus on the third step, finding the matrix F relating Delta theta to V_b. For a car or a diff-drive, the chassis frame {b} is midway between the wheels. The radius of the wheels is r, and positive rotation of the wheels corresponds to forward motion at those wheels. A little geometry shows that the planar twist provided by unit velocity of the left wheel has minus-r over 2d in angular velocity and r over 2 in the x_b direction. The planar twist for a unit velocity of the right wheel is similar, but with an opposite angular component. The 3-by-2 F matrix relates the wheel increments to the chassis twist V_b. For omnidirectional mobile robots, the wheel speeds theta-dot are related to the chassis twist V_b by the matrix H of zero, as we learned in a previous video. We can invert this relationship to get V_b equals H-pseudoinverse times theta-dot, which we can write equivalently as F theta-dot or F Delta theta. For the three omniwheel robot, inverting the H matrix we found in an earlier video produces this 3-by-3 F matrix relating Delta theta to V_b. For the four mecanum wheel robot, pseudo-inverting the H matrix produces this 3-by-4 matrix. With the F matrix and the wheel increment Delta theta, we get the planar chassis twist V_b. Expressing this as the 6-dimensional twist V_b6, we can use our standard matrix exponential to calculate the configuration of the chassis frame at step k-plus-1 relative to the configuration at step k. To express this in the space frame, we premultiply the matrix exponential by the transformation matrix representing the chassis frame at step k relative to the space frame. We can then extract the coordinates q_k-plus-1 from T_{s b_k-plus-1}. 
Equivalently, we can convert the matrix exponential to an increment of coordinates expressed in the chassis frame, rotate the x and y components to an increment in the space frame, and add the increment to the previous chassis configuration. In the final video of this chapter, we combine a mobile robot with a manipulator and consider their coordinated control.
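A sketch of one odometry update for a diff-drive, following the coordinate-increment route just mentioned; the closed-form planar integration below plays the role of the matrix exponential for a constant planar twist, and the wheel radius and half-track d are assumed values.

import numpy as np

def diff_drive_F(r, d):
    # Planar twist V_b = (omega_bz, v_bx, v_by) per unit rotation of (left, right) wheels.
    return np.array([[-r / (2 * d), r / (2 * d)],
                     [ r / 2,       r / 2      ],
                     [ 0.0,         0.0        ]])

def odometry_update(q, dtheta, F):
    # q = (phi, x, y); dtheta = wheel angle increments since the last reading.
    wz, vx, vy = F @ dtheta                      # chassis planar twist V_b (Delta t = 1)
    if abs(wz) < 1e-9:                           # pure translation
        dq_b = np.array([0.0, vx, vy])
    else:                                        # closed-form integration of the twist
        dq_b = np.array([wz,
                         (vx * np.sin(wz) + vy * (np.cos(wz) - 1)) / wz,
                         (vy * np.sin(wz) + vx * (1 - np.cos(wz))) / wz])
    phi = q[0]                                   # rotate the increment into the space frame
    R = np.array([[1, 0, 0],
                  [0, np.cos(phi), -np.sin(phi)],
                  [0, np.sin(phi),  np.cos(phi)]])
    return q + R @ dq_b

q = np.array([0.0, 0.0, 0.0])
q = odometry_update(q, np.array([0.5, 1.0]), diff_drive_F(r=0.05, d=0.2))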
Modern_Robotics_All_Videos
Modern_Robotics_Chapters_91_and_92_PointtoPoint_Trajectories_Part_2_of_2.txt
In the previous video, we learned that a trajectory can be represented as theta-of-s-of-t. We also found expressions for simple paths theta-of-s. In this video, we study time scalings s-of-t that turn a path into a trajectory. One simple time scaling is a third-order polynomial time scaling, where s is a cubic function of time. The time scaling is defined by the four coefficients of time, a_zero through a_three. The time derivative of the time scaling is shown here. To solve for the coefficients, we apply the four terminal constraints, which say that s is zero at time zero and one at time capital T, and that s-dot is zero at times zero and capital T for motions that begin and end at rest. Solving for the four coefficients using these four constraints, we get these values. Now we can plot s as a function of t, as well as s-dot and s-double-dot. Notice that s is a cubic, s-dot is a parabola, and s-double-dot is a line. s-dot begins and ends at zero, but s-double-dot jumps discontinuously to six over capital-T-squared at time zero. If we would prefer a smoother motion, where the acceleration at the beginning and end of the motion are zero, we can use a fifth-order polynomial time scaling. A fifth-order polynomial gives us two more coefficients to choose, and we use them to satisfy two more terminal constraints, that the acceleration is zero at times zero and capital T. Now s-double-dot is a cubic, allowing s-double-dot to be zero at the beginning and end of the motion. Another popular time scaling in motion control is the trapezoidal time scaling, named for its s-dot plot, shown here. First the robot follows a constant acceleration s-double-dot, then it coasts at a constant s-dot, then it follows a constant deceleration to rest. Like the third-order polynomial time scaling, this time scaling has discontinuous jumps in the acceleration. If this is undesirable, we can use an S-curve time scaling, shown here as an s-dot plot. An S-curve has seven segments. In the first segment, the robot follows a constant jerk. Jerk is the time derivative of acceleration. Then it follows a constant acceleration, followed by a constant negative jerk, followed by a coasting period at constant velocity. Then the robot slows down, symmetrically to the first three segments. The acceleration is zero at the beginning and end of the motion. Of course, the actual speed and acceleration of the robot at any time depends on the distance of the path and the total duration of the motion capital T, not just the form of the time scaling. In the next video we will see how to control the shape of a robot's path by having it pass through a set of timed via points.
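A sketch of the third-order and fifth-order polynomial time scalings, using the coefficients that follow from the terminal constraints above; the duration T and sample times are assumed values.

import numpy as np

def cubic_time_scaling(T, t):
    # s(0)=0, s(T)=1, s-dot(0)=s-dot(T)=0  =>  s(t) = 3(t/T)^2 - 2(t/T)^3
    return 3 * (t / T) ** 2 - 2 * (t / T) ** 3

def quintic_time_scaling(T, t):
    # additionally s-double-dot(0) = s-double-dot(T) = 0
    return 10 * (t / T) ** 3 - 15 * (t / T) ** 4 + 6 * (t / T) ** 5

T = 2.0
for t in np.linspace(0, T, 5):
    print(t, cubic_time_scaling(T, t), quintic_time_scaling(T, t))

# A straight-line path in joint space then becomes a trajectory as
#   theta(t) = theta_start + s(t) * (theta_end - theta_start)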
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_86_Dynamics_in_the_Task_Space.txt
Until now, we have been focusing on robot dynamics expressed in the space of joint motions and joint forces and torques. We could equivalently formulate the dynamics in the task space, that is, the space of end-effector motions and end-effector wrenches. We assume that the end-effector twist V equals the Jacobian times theta-dot, where the twist and Jacobian can either be in the space frame or the end-effector frame. If the Jacobian J is invertible, then V-dot equals J theta-double-dot plus J-dot theta-dot, and we can solve the equations for V and V-dot to find theta-dot and theta-double-dot. Plugging these into the joint-space dynamic equation, we get the task-space dynamics: the end-effector wrench is equal to Lambda of theta times V-dot plus eta of theta and V, where Lambda of theta is the robot's mass matrix expressed in the task space and eta of theta and V is the sum of the velocity-product and gravity terms expressed as an end-effector wrench. Each of Lambda and eta is expressed in terms of the joint positions theta, not the end-effector configuration X, since generally there could be more than one robot configuration for a given end-effector configuration. If the end-effector applies a wrench F_tip, it is simply added to the total wrench.
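As a sketch of the algebra, assuming the Jacobian is invertible as stated and writing the joint-space dynamics from Chapter 8 as tau equals M of theta times theta-double-dot plus h of theta and theta-dot, with h collecting the velocity-product and gravity terms, the substitution described above gives:

\dot\theta = J^{-1}(\theta)\,\mathcal{V}, \qquad
\ddot\theta = J^{-1}(\theta)\left(\dot{\mathcal{V}} - \dot{J}(\theta)\,J^{-1}(\theta)\,\mathcal{V}\right)

\mathcal{F} = J^{-\mathsf{T}}\tau
            = J^{-\mathsf{T}}\left(M(\theta)\,\ddot\theta + h(\theta,\dot\theta)\right)
            = \Lambda(\theta)\,\dot{\mathcal{V}} + \eta(\theta,\mathcal{V})

\Lambda(\theta) = J^{-\mathsf{T}} M(\theta)\, J^{-1}, \qquad
\eta(\theta,\mathcal{V}) = J^{-\mathsf{T}} h\left(\theta,\, J^{-1}\mathcal{V}\right) - \Lambda(\theta)\,\dot{J}\,J^{-1}\mathcal{V}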
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_231_Configuration_Space_Topology.txt
In addition to the number of degrees of freedom, another important property of a configuration space is its shape, or topology. Consider a plane and the surface of a sphere, for example. Both of these spaces have two dimensions, but their shape is quite different: the sphere wraps around in a way that the plane does not. This difference in shape impacts the way we use coordinates to represent the space. We say two spaces have the same "shape," or more formally that they are TOPOLOGICALLY EQUIVALENT, if one can be smoothly deformed into the other, without cutting or gluing. A classic example is shown in this video, where the surface of a doughnut, also called a torus, is smoothly deformed into the surface of a coffee mug. These are both two-dimensional spaces. They cannot be deformed into a plane, however: that would require cutting. So the mug and the torus are topologically equivalent, but they are not equivalent to a plane. The topology of a space is a fundamental property, and it is not affected by our choice of how to represent the space with coordinates. Some topologically distinct one-dimensional spaces are the circle, the line, and a closed interval of the line. Topologically distinct two-dimensional spaces include the plane, the surface of a sphere, the surface of a torus, and the surface of a cylinder. Let's look at some examples of physical systems with two-dimensional C-spaces. The first is a point moving in a plane. The topology of the C-space is just a two-dimensional Euclidean space, and a configuration can be represented by two real numbers. A spherical pendulum pivots about the center of the sphere, and the topology of the C-space is the two-dimensional surface of a sphere. A configuration can be represented by latitude and longitude. The C-space of a 2R robot is a torus, and a configuration can be represented by two coordinates ranging from zero to 2 pi. And finally, the C-space of a rotating sliding knob is a cylinder, and a configuration can be represented by one real number, representing the sliding distance, and one angle between zero and 2 pi. The topology of each C-space, as you see in the middle column, does not depend on how we decide to represent the space using coordinates, whereas the representation in coordinates depends on an arbitrary choice, such as where we define the zero angle for each joint of the 2R robot. Let's focus on the 2R robot. The topology of the C-space is a torus. We can represent the torus using the two joint angle coordinates, ranging between 0 and 2 pi. The space of coordinates is obtained from the torus by cutting the torus once to get a cylinder, then again to get a square subset of the plane. Because of this cutting, which means that the square and the torus do not have the same topology, even if the configuration on the torus moves smoothly, the coordinate representation changes discontinuously at 0 and 2 pi. In this video, you can see that as the robot moves, the coordinate representation jumps suddenly from one edge of the coordinate square to the other. Now let's focus on the rotating and sliding knob. Its C-space is a cylinder, due to one linear joint and one rotational joint. We can cut this cylinder once to get our coordinate representation, a flat subset of the two-dimensional plane. The angle coordinate is discontinuous at 0 and 2 pi. As the robot moves in this video, you see the discontinuity in the representation of the knob angle. Finally, let's look at the spherical pendulum. 
It has a spherical C-space, and we can see its representation as a subset of the plane. Each of the points on the top line segment of the representation corresponds to the same point, the North Pole of the sphere, and each of the points on the bottom line segment corresponds to the South Pole. This video shows the changing representation as the spherical pendulum moves. In summary, C-spaces of the same dimension can have different topologies. In the next video, we discuss different ways to represent C-spaces that are not flat Euclidean spaces.
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_332_Twists_Part_2_of_2.txt
In the last video, we learned that rigid-body velocities can be represented as a 6-vector twist. The twist can be represented in any arbitrary frame; for example, the twist could be represented as V_a in frame {a} or as V_b in frame {b}. If we want to change the frame of representation of a twist, it is tempting to try a subscript cancellation rule, V_a equals T_ab times V_b, but this doesn't work due to dimension mismatch: transformation matrices are 4 by 4 but twists are 6-vectors. It is apparent that we need to premultiply V_b by a 6 by 6 matrix. The 6 by 6 matrix we need is called the adjoint representation of a transformation matrix, and it is defined as you see here. Now we can apply a modified version of our subscript cancellation rule to change the frame of representation of a twist. By analogy to the matrix representation of angular velocity, we would like to find a matrix representation for twists. Recall that, for angular velocities, we had the 3 by 3 skew-symmetric matrix representations of angular velocities: bracket omega_b equals R-inverse times R-dot and bracket omega_s equals R-dot times R-inverse. Similarly, if T represents the body frame {b} in the space frame {s}, we have 4 by 4 matrix representations of the twists: bracket V_b equals T-inverse times T-dot and bracket V_s equals T-dot times T-inverse; these matrices are elements of little se(3), the space of 4 by 4 matrix representations of twists. Little se(3) gets its name from its relationship with big SE(3). The top left 3 by 3 submatrix is the skew-symmetric matrix representation of the angular velocity, as we've seen before, and the top right 3 by 1 vector is the linear velocity of a point at the origin of the frame, expressed in that frame. The bottom row is 4 zeros. Notice that we are overloading the bracket notation. In one case it means the matrix representation of an angular velocity. In this case it means the matrix representation of a twist. These matrix representations will be used in the next video when we develop the matrix exponential and log for rigid-body motions, analogous to the matrix exponential and log for rotations that we've already seen.
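A sketch of these two matrix representations, with twists ordered as (angular velocity, linear velocity); the helpers below build the 6 by 6 adjoint of a transformation matrix and the 4 by 4 matrix form of a twist.

import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def adjoint(T):
    # 6x6 adjoint representation of T = (R, p), so that V_a = adjoint(T_ab) @ V_b
    # changes the frame of a twist, per the modified cancellation rule.
    R, p = T[:3, :3], T[:3, 3]
    Ad = np.zeros((6, 6))
    Ad[:3, :3], Ad[3:, :3], Ad[3:, 3:] = R, skew(p) @ R, R
    return Ad

def twist_to_se3(V):
    # 4x4 matrix representation [V] of a twist V = (omega, v): skew(omega) in the
    # top left, v in the top right, and a bottom row of zeros.
    M = np.zeros((4, 4))
    M[:3, :3] = skew(V[:3])
    M[:3, 3] = V[3:]
    return M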
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_113_Motion_Control_with_Velocity_Inputs_Part_1_of_3.txt
When we model a robot, we usually assume that we have control of the forces and torques at the joints, and the resulting motion of the robot is determined by its dynamics. This is the model we will use starting in Chapter 11.4. It's simpler, however, and occasionally even more appropriate, to ignore the dynamics and assume that we have direct control of the joint velocities. This assumption might make sense if we trust local joint controllers to achieve the velocities we request. Also, for wheeled mobile robots it's common that a higher-level control system commands velocities of the wheels or chassis, letting a lower-level control system achieve those velocities. In Chapter 11.3, we study robot control when the controller directly commands velocities, not forces or torques. We'll start with a robot with a single joint, since the ideas generalize easily. The first idea is to use open-loop control. Since we know the desired joint velocity at any instant, our controller could simply command this desired velocity at all times. This is called open-loop control, or feedforward control, because there is no sensing of the actual joint position to close a feedback loop. If there is ever any error in the joint position, however, this open-loop approach cannot recover. Essentially all robot controllers employ feedback, and the simplest closed-loop controller commands a joint velocity equal to a gain K_p times the error theta_e. The gain K_p is called a proportional gain, since the control theta-dot is proportional to the error. This type of control is called proportional control, or P control for short. The gain K_p should be positive to ensure stability. For example, if the goal configuration is 1 radian and the actual configuration is zero, the error is positive, and a positive gain K_p would command a positive velocity of the joint, pulling the joint to the goal configuration. If the gain K_p were negative, the joint would move away from the goal configuration with increasing velocity the further it is from the goal. Let's take a look at the case where the desired velocity is zero. This is called setpoint control, because we are controlling the joint to a constant value. Then the rate of change of the error is just the negative of the joint velocity. Plugging in the P controller theta-dot equals K_p theta_e, we get this differential equation in theta_e. This can be written in our standard first-order form with a time constant of 1 over K_p. The unit step error response is shown here. The larger K_p, the faster the error converges to zero. In practice, there are limits on how large we can choose K_p. With a large K_p, the joint might have excessive vibration, as small position errors produce large velocities. Also, actuators have limited maximum velocity, and if the control law is often hitting those limits, then the response of the controller is no longer well modeled by our simple linear differential equation. Now assume the desired trajectory has a constant velocity. Then the rate of change of the error can be expressed as theta_d-dot minus theta-dot, and plugging in c for theta_d-dot and the P controller for theta-dot, we get this first-order nonhomogeneous differential equation. The dynamics are stable for a positive K_p, but the solution to the differential equation shows us that as t goes to infinity, the steady-state error is c over K_p, not zero. 
Although this error can be made small by choosing K_p large, as we just discussed, there are limits as to how large we can reasonably choose K_p. The key limitation is that the P controller needs error to command a nonzero velocity. So, while proportional control can eliminate all error when stabilizing a setpoint, it cannot eliminate all error when the desired motion has a nonzero velocity. In the next video, we will introduce another feedback controller, called a proportional-integral controller, to address this issue.
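A quick simulation sketch of this steady-state error, with an assumed gain and reference speed: the error settles near c over K_p rather than zero.

import numpy as np

Kp, c, dt = 10.0, 1.0, 1e-3        # assumed proportional gain, reference speed, timestep
theta, theta_d = 0.0, 0.0
for _ in range(10000):             # simulate 10 seconds
    theta_d += c * dt              # reference moving at constant velocity c
    theta_dot = Kp * (theta_d - theta)   # P control of the commanded joint velocity
    theta += theta_dot * dt
print(theta_d - theta)             # approaches the steady-state error c/Kp = 0.1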
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_1216_Planar_Graphical_Methods_Part_2_of_2.txt
In the previous video we saw that a cone of planar twists can be represented as a region of centers of rotation. In this video, we'll learn a simple rotation center representation for the feasible twist cone of a planar body subject to multiple stationary contacts. This figure shows a stationary triangle contacting a planar body. A center of rotation at the contact point, whether it is clockwise or counterclockwise, causes rolling at the contact. Therefore, we label this rotation center R. Rotation centers to the left of the contact normal are feasible if they have a plus sign, for counterclockwise rotation. Feasible rotation centers to the right of the contact normal have a minus sign. All of these rotation centers are labeled B, for breaking contact. An example is this rotation center labeled minus. Rotation about this center causes breaking contact. Finally, rotation centers on the contact normal are labeled S, for sliding. For planar problems, sliding contacts can be further classified as left-sliding, where the body slides left relative to the constraint, or right-sliding, where the body slides right relative to the constraint. With this distinction, we can refine the labels of rotation centers on the contact normal to be Sl, where the body slides left relative to the constraint, or Sr, where the body slides right relative to the constraint. For example, this positive rotation center causes the body to slide to the right relative to the triangular constraint, while this positive rotation center causes the body to slide to the left relative to the constraint. Putting everything together, we get this picture of the twists that are feasible when there is a single contact. Feasible rotation centers to the left of the contact normal have a plus label and cause breaking contact. Feasible rotation centers to the right of the contact normal have a minus label and cause breaking contact. Rotation centers at the contact location cause rolling. Finally, rotation centers along the contact normal line, but not at the contact, cause sliding. Positive rotation centers above the contact and negative rotation centers below the contact cause right sliding, and negative rotation centers above the contact and positive rotation centers below the contact cause left sliding. Now consider a body with three points of contact. Contacts 1 and 2 are with a table, and contact 3 is a robot finger. Contact 1 allows the twists shown here: plus or minus for rotation centers along the normal, plus for rotation centers to the left of the normal, and minus for rotation centers to the right of the normal. The rotation centers that satisfy contact 2 are intersected with those for contact 1, yielding this smaller set of rotation centers. Finally, the third contact reduces the set of feasible rotation centers even further. This set of rotation centers is a graphical representation of the feasible twist cone for the three contacts. We could also write the contact mode for each group of rotation centers. The contact mode has 3 labels, one for each of the three contacts. Rotation about any of the rotation centers labeled BBB causes breaking at all three contacts. Let's focus on one particular positive rotation center and illustrate it on the body. According to the contact mode, this rotation center causes slipping to the right at contacts 1 and 3 and breaking contact at contact 2. If we set the body in motion, though, we see that it immediately penetrates the finger. 
So in fact this rotation center is not possible because of contact 3. Our prediction was wrong because our first-order analysis considers only the contact normal, not the full details of the local contact geometry. In general, if a first-order kinematic analysis concludes that a twist causes breaking or penetrating of a contact, then so will a higher-order analysis. But if a first-order analysis indicates rolling or sliding, a higher-order analysis may change the conclusion. In the next video we conclude our purely kinematic analysis of contact by studying form closure, which occurs when the contacts completely immobilize the body.
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_53_Singularities.txt
We've seen two major uses of Jacobian matrices: converting a set of joint velocities theta-dot to an end-effector twist V and converting an end-effector wrench F to a set of joint forces and torques tau. The twists and wrenches can be expressed in the space frame {s} or the end-effector frame {b}. The Jacobian is a 6 by n matrix, where n is the number of joints. This means that the rank of the Jacobian can be no greater than the minimum of 6 and n. We say that the Jacobian is full rank at a configuration theta if the rank is equal to the minimum of 6 and n. We say that the Jacobian is singular at a configuration theta-star if the rank of the Jacobian at theta-star is less than the maximum rank the Jacobian can achieve at some configuration. At a singular configuration, the robot loses the ability to move in one or more directions. We can also categorize Jacobians according to the number of joints n. If n is less than 6, the Jacobian is "tall," meaning it has more rows than columns. The set of reachable configurations for the end-effector is less than 6-dimensional, so we call such robots kinematically deficient. This does not mean the robot is not useful; it just means it is not capable of general motion at the end-effector. An example robot is the 4-joint RRRP robot shown here, which has a 6-by-4 Jacobian. If n equals 6, the Jacobian is a 6-by-6 square matrix, as for this 6R robot. Such robots are often called general purpose manipulators, because they are capable of general 6-dimensional rigid-body motion at their end-effectors. If n is greater than 6, the Jacobian is "fat," meaning it has more columns than rows. An example of such a robot is the 7R robot pictured here, which has a 6-by-7 Jacobian. Such robots are called redundant, because they can achieve the same end-effector twist with different joint velocities. This capability can be useful in a number of circumstances, allowing internal motion of the arm that is not visible in motion at the end-effector. Your own arm has a redundancy like this: keeping your hand stationary at a fixed configuration in space, you can still move your arm internally. It can be difficult to visualize 6-dimensional motion of a robot, so to illustrate the shape and rank properties of the Jacobian, we will use a simple planar example. In this example, the end-effector velocity v_tip and force f_tip are 2-vectors, and the Jacobian is 2 by n, where n is the number of joints. For the 3R arm shown here, the number of joints n is 3, the robot is redundant, and its 2-by-3 Jacobian matrix is full rank, meaning its rank is 2, at the configuration shown. Since the Jacobian is rank 2, the robot can generate any linear velocity at its end-effector, and any force applied to the end-effector must be actively resisted by at least one of the joints. Using the fact that v_tip equals J theta-dot, we can always calculate v_tip given the joint velocities theta-dot. This figure shows the components of the endpoint velocity caused by the individual joint velocities, and we can sum them to get the end-effector velocity v_tip. Since the rank of J is 2, any v_tip can be created by the joints. You could imagine asking the inverse question: given v_tip, what is theta-dot? The answer to this question is not as straightforward, however, because in general, as in this case, the inverse of J does not exist, either because J is not square or because it is singular.
Because this 3R robot is redundant, it turns out that for any v_tip, there is a full one-dimensional set of solutions of joint velocities that achieves v_tip. This inverse question will be addressed in more detail in Chapter 6. Moving on to forces, using the fact that tau equals J-transpose times f_tip, we can always find the joint forces and torques tau that correspond to the end-effector force f_tip. For the f_tip shown here, we can graphically calculate tau_1, the torque about the first joint, using the relationship tau_1 equals minus r_1 times the magnitude of f_tip, where r_1 is the length of the vector from the joint to the line of force, perpendicular to f_tip. Similarly, we can calculate the torques at joints 2 and 3. Each joint has to individually support the endpoint force f_tip. You could also imagine asking the inverse question: given tau, what is the endpoint force f_tip? But this question is not as straightforward, because the inverse of J-transpose may not exist. For the 3R arm, for most random choices of joint torques, the arm will have internal motion, and will not simply statically resist an externally applied force minus f_tip. Moving on, let's consider the redundant 3R arm when it is fully stretched out. The rank of the 2-by-3 Jacobian drops to 1, meaning the arm is at a singular configuration. Rotation at joints 1, 2, and 3 produces only vertical velocity at the end-effector; no horizontal velocity can be achieved. Also because of the singularity, a horizontal force applied at the end-effector is resisted by the mechanical structure of the robot; no joint torques have to be applied. This 2R robot has a square Jacobian that has rank equal to 2 at the configuration shown. This means that any tip velocity is possible and any force applied to the tip must be actively resisted by the joints. In this picture, the 2R robot is at a singular configuration, where only vertical velocities are possible and horizontal forces can be passively resisted by the mechanical structure of the robot. Finally, we have a 1R robot. The Jacobian is 2-by-1 and is full rank, meaning the rank is equal to 1, at any configuration. This robot is kinematically deficient for the task of achieving arbitrary linear velocities at the tip, as it can only achieve linear velocities perpendicular to the link. Any horizontal force is passively resisted by the joint, while any vertical force must be actively resisted by the joint torque. In the next and final video of Chapter 5, we will characterize how close a robot is to being singular using the manipulability ellipsoid touched on in the first video of this chapter.
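A sketch of these rank calculations for the planar 3R arm, assuming unit link lengths and using the standard planar Jacobian whose column i is the tip velocity produced by unit velocity at joint i; the joint angles below are assumed values.

import numpy as np

def planar_3R_jacobian(theta, L=(1.0, 1.0, 1.0)):
    t1, t12, t123 = theta[0], theta[0] + theta[1], theta[0] + theta[1] + theta[2]
    s = [np.sin(t1), np.sin(t12), np.sin(t123)]
    c = [np.cos(t1), np.cos(t12), np.cos(t123)]
    # Column i is the tip velocity produced by unit velocity at joint i.
    return np.array([[-L[0]*s[0] - L[1]*s[1] - L[2]*s[2], -L[1]*s[1] - L[2]*s[2], -L[2]*s[2]],
                     [ L[0]*c[0] + L[1]*c[1] + L[2]*c[2],  L[1]*c[1] + L[2]*c[2],  L[2]*c[2]]])

J = planar_3R_jacobian([0.3, 0.4, 0.5])
print(np.linalg.matrix_rank(J))                                     # 2: full rank
print(np.linalg.matrix_rank(planar_3R_jacobian([0.3, 0.0, 0.0])))   # 1: stretched out, singular

# Joint torques corresponding to a tip force f_tip = (1, 0):
tau = J.T @ np.array([1.0, 0.0])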
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_24_Configuration_and_Velocity_Constraints.txt
For robots with links and joints forming closed loops, it is often easier to find an implicit representation of the C-space rather than an explicit parametrization. Consider the 4-bar closed chain shown here. Grubler's formula tells us the 4-bar has one degree of freedom, so it should be possible to parametrize the C-space by a single variable. This representation may be hard to derive and may have subtle singularities, so instead we could view the C-space as a 1-dimensional space embedded in the 4-dimensional space of joint angles, defined by the three loop-closure equations that say the final position and orientation, after going around the loop, must be equal to the initial position and orientation. If we define the vector of joint angles theta, we can rewrite the loop-closure equations in this vector form. These constraints are called holonomic constraints, constraints that reduce the dimension of the C-space. If the robot's configuration is defined by n variables subject to k independent holonomic constraints, then the dimension of the C-space, and the number of degrees of freedom, is n minus k. If the robot is moving, we could ask how these holonomic constraints restrict the velocity of the robot. Since g of theta has to be zero at all times, the time rate of change of g must also be zero at all times. We can write these constraints as a matrix dependent on the configuration theta times the joint velocities theta-dot equal to zero. If we call this matrix A of theta, we can write the velocity constraints as A of theta times theta-dot equals zero, where the A matrix has k rows and n columns. Velocity constraints like this are called Pfaffian constraints. Sometimes we call holonomic constraints "integrable" constraints, since they are essentially the integral of these velocity constraints. In some cases, though, a set of velocity constraints cannot be integrated to equivalent configuration constraints. Consider the chassis of a car driving on a plane. If we define an x-y reference frame, we can represent the configuration of the chassis as q = (phi, x, y), where phi is the chassis angle, and (x,y) refers to the location of a point halfway between the rear wheels. If the forward velocity of the car is v, the x-y velocity is x-dot = v cos phi and y-dot = v sin phi. We can express v as y-dot divided by sin phi and substitute this into our equation for x-dot to get the velocity constraint x-dot times sine phi minus y-dot times cosine phi equals zero. We can write this as a Pfaffian constraint A of q times q-dot equals zero where the single row of the 1 by 3 A matrix is 0, sine of phi, and minus cosine of phi. Unlike a holonomic constraint, this velocity constraint cannot be integrated to give an equivalent configuration constraint. Therefore we call this a nonholonomic constraint. A nonholonomic constraint reduces the space of possible velocities of the car -- the car cannot slide directly to the side -- but it does not reduce the space of configurations. Sideways motion can be achieved by parallel parking, and the car can reach any configuration in the 3-dimensional C-space. A robot can be subject to both holonomic and nonholonomic constraints. Again using the car as an example, if we consider the chassis to be a rigid body in space, then three holonomic constraints keep the chassis confined to the plane, while one nonholonomic constraint prevents sideways sliding. 
To summarize, holonomic constraints are constraints on configuration, nonholonomic constraints are constraints on velocity, and Pfaffian constraints take the form A of theta times theta-dot equals zero. Determining whether Pfaffian constraints are actually holonomic configuration constraints or only nonholonomic velocity constraints is left to Chapter 13. In the next video we wrap up Chapter 2 by introducing the task space and the workspace.
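A small sketch of the car's Pfaffian constraint, with an assumed heading and forward speed: a rolling motion satisfies A of q times q-dot equals zero, while a sideways slide does not.

import numpy as np

def A(q):
    # Pfaffian constraint matrix for the car chassis, q = (phi, x, y):
    # no sideways sliding, so x-dot sin(phi) - y-dot cos(phi) = 0.
    phi = q[0]
    return np.array([[0.0, np.sin(phi), -np.cos(phi)]])

phi, v = 0.7, 1.5                                            # assumed heading and forward speed
q = np.array([phi, 2.0, 3.0])
q_dot = np.array([0.2, v * np.cos(phi), v * np.sin(phi)])    # rolling without sliding
print(A(q) @ q_dot)                                          # [0.]: constraint satisfied

q_dot_slide = np.array([0.0, np.sin(phi), -np.cos(phi)])     # pure sideways slide
print(A(q) @ q_dot_slide)                                    # [1.]: constraint violated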
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_1023_Graphs_and_Trees.txt
Although the C-space of a robot is a continuous space, in motion planning we typically discretize it in some way. For example, this image shows a mobile robot in a maze. We could represent the free space of this maze by sampling some free configurations and drawing lines between configurations that can reach each other by a straight-line path. This graph is now our discretized representation of the free space. A graph consists of a set of nodes and a set of edges between them. Consider, for example, this graph consisting of 5 nodes, lettered a through e. Drawn this way, each edge can be followed in either direction, so we call this an unweighted undirected graph. It is undirected because each edge can be followed in either direction. It is unweighted because the cost of traversing any edge is the same. For a weighted graph, however, different edges have different costs. For example, the cost associated with an edge may be the length of the path corresponding to the edge, or the amount of energy or time it takes to traverse it. In this example, it is much cheaper to go from node b to node a than it is to go from node b to node c. Edges can also be directed, as you see in this weighted directed graph. Here, it is possible to go back and forth between nodes a and b, but it's less costly to go to a than it is to go to b. Also, we can see that it is possible to go from node c to node e, but it is not possible to go from node e to node c. A directed graph is often called a "digraph" for short. Finally, we define a tree to be a specific type of directed graph, as shown here. A tree has one root node and all other nodes have 1 parent, meaning they can be reached by a single edge from only one other node. A tree has no cycles. Any node with no children is called a leaf of the tree. A tree can be weighted or unweighted. In the coming videos, we will see examples of graphs and trees in motion planning. In preparation for that, in the next video I will describe the A-star search algorithm for finding optimal paths on a graph.
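A sketch of one common way to store such a graph, as an adjacency dictionary; the node names follow the lecture's a through e, but the edge costs here are assumed for illustration.

# Weighted directed graph as an adjacency dictionary:
# graph[node] maps each neighbor reachable by one directed edge to the edge cost.
# (Costs are made-up illustrative values.)
graph = {
    "a": {"b": 5},
    "b": {"a": 2, "c": 4},
    "c": {"d": 1, "e": 3},
    "d": {"e": 1},
    "e": {"b": 6},
}

# An unweighted undirected graph is the special case where every cost is 1 and
# every edge appears in both directions.
cost_b_to_a = graph["b"]["a"]   # cheaper than graph["a"]["b"], as in the example
# "e" has no entry for "c": you can go from c to e, but not from e to c.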
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_1221_Friction.txt
Forces transmitted through a contact can include both normal forces as well as tangential forces, due to friction. To understand the friction force, imagine pulling a block with a spring. To cancel the downward gravitational force on the block, the floor pushes upward with a normal force f_n. The force applied by the spring is f, which is opposed by the friction force f_t applied by the floor. At first, the pulling force f is too small to move the block. As we extend the spring, the force f gets larger, as does the resisting friction force f_t. When f grows large enough, the block begins to slide. If the spring is pulled with a constant velocity, the block matches the velocity, and f and f_t are equal and opposite. Let's plot the friction force f_t as a function of the block's sliding velocity v. When the velocity is zero, the friction force f_t could be anywhere in the range minus mu f_n to mu f_n, where mu is called the friction coefficient. When the sliding velocity is not zero, the magnitude of the friction force is mu f_n, and it acts in the direction opposite the sliding velocity. By this model, the friction force depends only on the direction of sliding, not the speed of sliding. This empirical, approximate model of dry friction is called Coulomb friction. According to this approximate model, if the sliding velocity is zero, then the magnitude of the tangential friction force is less than or equal to mu times the normal force, which is nonnegative. The frictional force could act in any direction. If the sliding velocity is nonzero, then the friction force magnitude is mu f_n, and it acts in the direction opposite the sliding direction. If the velocity is zero, but the acceleration a is nonzero, then slip is about to occur, and the same equation applies, substituting a for v. The Coulomb friction model is just a rough approximation for the micromechanics of contact, and there are many more detailed models. One common enhancement to the model is to define two friction coefficients, a static friction coefficient mu_s and a kinetic friction coefficient mu_k, where the static coefficient is larger than the kinetic coefficient. This friction law can be visualized as shown here. Larger friction forces are available to resist initial sliding, but once sliding is initiated, the friction coefficient drops. In the rest of this chapter, though, we will use the basic Coulomb friction law with a single friction coefficient. This model is attractive for its simplicity, and because it approximately captures the behavior of many dry surfaces in contact. The friction coefficient mu depends on both materials in contact, and typically ranges from values close to zero, when one of the materials is teflon or ice, to values around 1 when one of the materials is rubber. The set of all forces that can be transmitted through a Coulomb friction contact can be visualized as a friction cone. For a frame defined at the contact, the normal force f_z must be nonnegative, and the tangential force magnitude must be less than or equal to mu f_z. Looking at this cone from the side, we define the friction angle alpha, which is the inverse tangent of mu. This figure also represents the friction cone for a planar contact, and we can define f_1 and f_2 to be vectors along the friction cone edges. Then the set of all forces that can be transmitted through a planar contact is the positive span of f_1 and f_2. 
Unlike a planar friction cone, which can be represented as the positive span of two forces, a spatial friction cone cannot be represented as the positive span of a finite number of forces. For computational purposes, though, it's common to approximate a quadratic cone as a polyhedral cone defined as the positive span of four forces, where the z component of each force is 1 and the x or y component is mu or minus mu. The polyhedral cone is an underapproximation of the friction cone. To more closely approximate the quadratic cone, one could use more cone edges. Contact forces create moments about coordinate frames not at the contact point. To represent these moments, we can define a wrench cone that corresponds to the friction cone. For the coordinate frame shown here, and a planar friction cone which is the positive span of f_1 and f_2, the wrench cone includes the moments p cross f, where p is the contact point in the coordinate frame. The planar friction cone can be plotted as a wrench cone in the three-dimensional wrench space. The wrench cone is the positive span of the wrenches from the friction cone edges. The linear components of the wrench cone edges are mostly in the f_y direction, and the moments about the z-axis are negative. Adding another friction cone creates a corresponding wrench cone with positive moments. The set of all wrenches that can be transmitted through the two contacts is the positive span of the four wrenches at the edges of the two friction cones. We call this a composite wrench cone, composed of wrenches due to multiple contacts. In the next video I'll introduce a convenient graphical representation of planar wrench cones, analogous to our graphical representation of planar twist cones.
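Here is a rough Python sketch of how the edge wrenches of a planar friction cone might be computed. The (m_z, f_x, f_y) ordering of planar wrenches, the function name, and the example contact are choices made for illustration; a composite wrench cone is then the positive span of the rows returned for all contacts, which you could form by stacking them.

```python
import numpy as np

def planar_contact_wrench_cone(p, n_hat, mu):
    """Edge wrenches of the wrench cone for a planar point contact with Coulomb friction.

    p      : 2-vector, contact location in the chosen frame
    n_hat  : unit 2-vector, contact normal pointing into the object
    mu     : friction coefficient
    Returns a 2x3 array whose rows are the edge wrenches (m_z, f_x, f_y).
    """
    p, n_hat = np.asarray(p, float), np.asarray(n_hat, float)
    alpha = np.arctan(mu)                          # half-angle of the friction cone
    edges = []
    for ang in (alpha, -alpha):
        c, s = np.cos(ang), np.sin(ang)
        f = np.array([[c, -s], [s, c]]) @ n_hat    # rotate the normal by +/- alpha
        m_z = p[0] * f[1] - p[1] * f[0]            # planar moment, m_z = p x f
        edges.append([m_z, f[0], f[1]])
    return np.array(edges)

# Example: a contact at (1, 0) with inward normal +y and mu = 0.5
print(planar_contact_wrench_cone([1.0, 0.0], [0.0, 1.0], 0.5))
```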
Modern_Robotics_Chapter_132_Omnidirectional_Wheeled_Mobile_Robots_Part_2_of_2.txt
Recall that the configuration of the chassis of a wheeled robot can be written q = (phi,x,y), the heading direction and position of the chassis. In the previous video, we derived the relationship u equals H-of-phi times q-dot, where u is the vector of wheel speeds. For any properly constructed omnidirectional robot, any q-dot can be achieved by a proper choice of wheel speeds. Because of this, motion planning and control of an omnidirectional wheeled robot is relatively simple. All the kinematic trajectory generation methods from Chapter 9, and most of the motion planners from Chapter 10, can be applied directly. Once a trajectory q-of-t has been planned, we can apply feedforward plus PI feedback control, as described in Chapter 11. The commanded chassis velocity q-dot is calculated as the sum of the desired chassis velocity at the current time instant plus feedback terms that are proportional to the current configuration error and the time integral of the error. Even simple proportional control can yield reasonable performance. For any controller, we need to estimate the chassis configuration q. The configuration can be estimated using odometry, covered later in this chapter, or using external sensors, like cameras, GPS, or laser range finders. Once the commanded chassis velocity q-dot is calculated, we use the kinematic model to calculate the wheel speeds. If the wheels have bounded speeds, then the motion planner should take these into account. We can transform bounds on the wheel speeds to bounds on the chassis twist in the body frame. Recalling the kinematic model expressed in terms of twists, bounds on the speed of wheel i create bounds on h_i-of-zero times the twist V_b. These two bounds define two parallel planes in the twist space, given by these two equations. Any twist between these two planes satisfies the wheel speed bound. Intersecting these feasible twists for each of the m wheels, the robot's feasible twists lie inside a convex polyhedron with m pairs of parallel faces. For the omniwheel robot, there are 3 wheels, so the feasible twists live inside a 6-sided polyhedron, as shown here. The intersection of this polyhedron with the plane of zero angular velocity is indicated and also shown in the figure below. The feasible linear velocities are bounded by a hexagon. For the robot with 4 mecanum wheels, the twist limits are described as an 8-sided polyhedron. The intersection of this polyhedron with the plane of zero angular velocity is a square. In the next video we move on to kinematic modeling of nonholonomic robots.
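Here is a sketch of one step of this feedforward-plus-PI control in Python, assuming the robot-specific matrix H(phi) is supplied by the caller. The function and parameter names are hypothetical, not part of the original lecture.

```python
import numpy as np

def omni_chassis_control(q, q_d, qdot_d, err_integral, Kp, Ki, H_of_phi, dt):
    """One step of feedforward-plus-PI control for an omnidirectional chassis.

    q, q_d       : current and desired configurations (phi, x, y)
    qdot_d       : desired chassis velocity at this instant (feedforward term)
    err_integral : running integral of the configuration error
    Kp, Ki       : 3x3 gain matrices
    H_of_phi     : function phi -> H(phi), so that u = H(phi) @ qdot
    Returns the commanded wheel speeds u and the updated error integral.
    """
    err = q_d - q                          # ignoring angle wrap-around in phi for simplicity
    err_integral = err_integral + err * dt
    qdot_cmd = qdot_d + Kp @ err + Ki @ err_integral   # commanded chassis velocity
    u = H_of_phi(q[0]) @ qdot_cmd                      # map chassis velocity to wheel speeds
    return u, err_integral
```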
Modern_Robotics_Chapter_1332_Controllability_of_Wheeled_Mobile_Robots_Part_1_of_4.txt
Controllability refers to the ability to drive a system from one state to another. For a kinematic model of a wheeled mobile robot, the state is just the configuration q of the chassis, with components phi, x, y. Consider an omnidirectional wheeled mobile robot with a goal configuration q_goal. A simple controller to drive the robot to the goal configuration is the proportional controller q-dot equals K times (q_goal minus q), where the feedback gain matrix K acts like a spring to pull q to q_goal. We could choose K to be the identity matrix, but as long as it's positive definite, the configuration error will decay to zero, as seen in this animation. We might as well choose the goal configuration as the origin, so this controller simplifies to q-dot equals minus K times q. This controller only works because the chassis velocity q-dot and the controlled wheel speeds u satisfy u equals H of phi times q-dot, where H is rank 3 as we learned in an earlier video. This means that any q-dot can be achieved by some choice of wheel speeds u. Therefore we can write the controller as q-dot equals minus K times q equals the pseudoinverse of H of phi times u, which we express more simply as q-dot equals nu. This is a simple example of a more general class of linear control systems x-dot equals A x plus B nu, where x is n-dimensional, nu is m-dimensional, A is n-by-n, and B is n-by-m. Systems such as this are said to be linearly controllable if they satisfy the Kalman rank condition: the rank of the matrix whose columns are given by the matrices B, AB, A-squared B, etc., is equal to n, the dimension of x. This condition ensures that the m controls can act on all n states. If a system is linearly controllable, it's possible to drive it between arbitrary states. To stabilize the origin, we can choose the feedback controller nu equals minus K times x, resulting in the dynamics x-dot equals A minus B K times x. For stability, we need to choose K so that the eigenvalues of A minus B K all have negative real components. Since we can write our omnidirectional mobile robot control system as q-dot equals nu, it's a simple example of a linear control system with A equal to zero and B the identity matrix. The identity matrix trivially satisfies the Kalman rank condition. The canonical nonholonomic mobile robot is not a linear control system, because the matrix G depends on the configuration q. We might still wonder if there is a simple control law that can stabilize a desired chassis configuration q_goal. Without proving it, I'll state a famous negative result: The system q-dot equals G of q times u, where the rank of G of zero is less than the dimension of q, cannot be stabilized to the origin by a continuous time-invariant feedback control law. For the canonical nonholonomic mobile robot, the rank of G is always 2, which is less than 3, the dimension of q. So, not only is there no control law that is linear in q that can stabilize a desired configuration, there isn't even a stabilizing control law that's continuous in q. The nonholonomic mobile robot is not linearly controllable, but in the next video, I'll define weaker notions of controllability, taken from nonlinear control theory, that apply to nonholonomic mobile robots.
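The Kalman rank condition is easy to check numerically, as in this short Python sketch; the omnidirectional-robot case from above is used as the example.

```python
import numpy as np

def is_linearly_controllable(A, B):
    """Check the Kalman rank condition: rank([B, AB, A^2 B, ..., A^(n-1) B]) == n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

# The omnidirectional robot q-dot = nu: A = 0, B = I, which trivially satisfies the condition.
print(is_linearly_controllable(np.zeros((3, 3)), np.eye(3)))   # True
```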
Modern_Robotics_Chapter_21_Degrees_of_Freedom_of_a_Rigid_Body.txt
The most fundamental question you can ask about a robot is, "Where is it?" The answer to this question is the robot's configuration, which is a specification of the positions of all the points of the robot. In this book, robots are constructed of rigid bodies, like this one. We often call these rigid bodies links. These links are connected together by joints, like this revolute joint of my tinkertoy robot. Since the links are rigid and have a constant shape, we typically only need a few numbers to represent the configuration of a robot. Compare to trying to represent the configuration of this pillow, which can be deformed in a wide variety of ways. Some robots, called soft robots, are like this pillow, but we don't cover soft bodies in this book. So, the configuration of a robot is a representation of the positions of all the points of the robot. The configuration space, which we often call the C-space for short, is the space of all configurations of the robot. The number of degrees of freedom is the dimension of the C-space, or the minimum number of real numbers you need to represent the configuration. As an example, this two-joint tinkertoy robot has two degrees of freedom, given by the angles of the two joints. I can visualize the angle of joint 2 as a point on a circle, and the angle of joint 1 as a point on another circle. To visualize the full C-space, let's rotate the circle for joint 1 to be perpendicular to the circle for joint 2. At each angle of joint 1, there is a circle of possible joint angles for joint 2, so I can replicate the joint 2 circle at every angle of joint 1. Therefore, the C-space of the two-joint robot can be visualized as the two-dimensional surface of a torus. Now, for every configuration of the robot, there is a unique point on the torus, and for every point on the torus, there is a unique configuration of the robot. As I mentioned earlier, the dimension of a robot's C-space is the number of degrees of freedom. Since a robot consists of rigid bodies, the number of degrees of freedom of a robot depends on the number of degrees of freedom of a rigid body. A rigid body in three-dimensional space has 6 degrees of freedom, but how do we determine that? First, let's choose the position of one point on the body; let's call that point A. The x-y-z coordinates of point A are three numbers. Next, we can choose the x-y-z coordinates of a second point B. But because this is a rigid body, we can't choose the three coordinates arbitrarily; B's constant distance to point A places one constraint on its location. The point B has to be somewhere on the surface of a sphere centered at A, and we only need two numbers to represent a point on a sphere, like latitude and longitude. Now that we've fixed points A and B, there are two constraints on point C: it has to be on the circle at the intersection of spheres centered at A and B. We only need one number to specify a point on a circle. Once we've fixed the location of points A, B, and C, provided they are not all on the same line, the body is fixed in space. Therefore, a rigid body has six degrees of freedom: three to specify the location of point A, two to specify point B, and one to specify point C. To summarize, let's count, for each point on the body, the number of coordinates, the number of constraints on those coordinates, and therefore the number of real freedoms in choosing each point. Point A has three coordinates, and no constraints on how we choose them. 
Point B has three coordinates, but they are subject to one constraint, so we only have two real freedoms. Point C has three coordinates, but they are subject to two constraints, so there is only one real freedom. All other points have three coordinates but are subject to three independent constraints, so there are no further freedoms. Thus a rigid body in space has six total degrees of freedom, three of which are linear, or x-y-z, and three of which are angles, sometimes called roll, pitch, and yaw. We could use the same process to learn that a rigid body in a two-dimensional plane has three degrees of freedom, two of which are linear and one of which is an angle. We could even study a rigid body in four-dimensional space, and learn that it has ten degrees of freedom, four of which are linear and six of which are angles. We can summarize what we've learned in this video with the following general rule, which holds for any system, not just rigid bodies: The dimension of the C-space, or the number of degrees of freedom, equals the sum of the freedoms of the points minus the number of independent constraints acting on those points. Since our robots are made of rigid bodies, we can express the number of degrees of freedom more simply as the sum of the freedoms of the bodies minus the number of independent constraints acting on the bodies. As an example, we can take the 6-degree-of-freedom spatial body and turn it into a 3-degree-of-freedom planar body by adding the three constraints that the z-coordinates of points A, B, and C are all equal to zero. In the next video we will use what we've learned to understand the number of degrees of freedom of a general mechanism.
Modern_Robotics_Chapter_116_Hybrid_MotionForce_Control.txt
In the previous video I described force control, where the robot is capable of generating an end-effector wrench in any direction, as if it is buried in concrete. This is rarely the case, though; typically there are some directions the end-effector can move freely. In this video, we assume that the robot interacts with an environment consisting of rigid constraints, and the control objective is hybrid motion-force control: to control the motion in some directions and the forces in others. An example is opening a door. When the gripper firmly holds the door handle, the gripper's motion is constrained to one degree of freedom: rotation about the hinges of the door. The robot can freely control the velocity of the end-effector in this one degree of freedom. Simultaneously, the gripper can apply arbitrary wrenches in the five-dimensional space of wrenches consisting of moments about all three axes of the door handle and linear forces upward or through the hinges. Essentially, the constraints provided by the door hinges partition the 6-dimensional space of end-effector wrenches into a subspace of wrenches that cause motion of the door and a subspace of wrenches that push against the constraints. Another example is a robot erasing a whiteboard. The robot can freely control motion of the eraser in the x-y plane, and it can simultaneously control forces in the z-direction, into the board. We can express the end-effector twist and wrench in terms of their components, and we see that the three motion constraints provided by the board dictate which twist components and which wrench components the robot can control. The robot can control the eraser's angular velocity about the z-axis and the linear velocity along the x- and y-axes, and it can control the force along the z-axis and the moment about the x- and y-axes. Although the board provides inequality constraints that prevent motion into the board but not motion away, I'll treat the constraints as equality constraints, for simplicity: the eraser can neither move into nor away from the board. Also, I'll ignore friction at the constraints. The constraints on the end-effector's twist V_b can be expressed as A of theta times V_b equals zero. These are Pfaffian velocity constraints expressed in terms of the end-effector twist. In Chapter 8, we derived the task-space dynamics of a robot without constraints. If we add k end-effector motion constraints, where k is five for the door example and three for the whiteboard example, then the constrained dynamics consists of wrenches that cause motion of the end-effector and wrenches that cause end-effector forces, F_tip. This follows our development of constrained dynamics from Chapter 8. The k rows of the A matrix form a basis for the space of wrenches that can be applied to the environment, and the k-vector of coefficients lambda determines the particular wrench in this space. The desired end-effector wrench specified to the hybrid motion-force controller should be a linear combination of the rows of the A matrix, and the desired motion should satisfy the velocity constraints at all times. We can define a projection matrix P of theta, following the development in Chapter 8, that projects an arbitrary end-effector wrench F_b to the portion of that wrench that causes motion of the end-effector. The complement of that projection, the identity matrix minus P, extracts the portion of the wrench that acts against the constraints.
Skipping the derivation of P, which can be found in the book, we get this equation for the 6-by-6 matrix P, where the rank of the P matrix is 6 minus k. In other words, 6 minus k wrench directions cause robot motion and k directions cause constraint forces. If the environment has moving masses, as with the door, the mass properties of the environment should be incorporated in the dynamics. With all of this as background, we can express our hybrid motion-force controller in this form: the wrench created by the end-effector is the sum of the wrench specified by the force control law, as we studied in the last video, and the wrench specified by the task-space motion control law, but only after those wrenches have been projected to their appropriate subspaces. This allows you to design the force and motion control laws as if each is acting independently, because we use the projection P to throw away any wrench components specified by the motion controller that would cause constraint forces, and we use the projection I minus P to throw away any wrench components specified by the force controller that would cause motion of the robot. Once we have calculated the end-effector wrench F_b, we calculate the commanded joint forces and torques using the Jacobian transpose. This is the idea behind hybrid motion-force control, but actual implementation involves a number of important details, such as estimating the constraints that the robot is actually subject to. So this completes the videos for Chapter 11 on robot control. So far in this book, we have analyzed the kinematics of robots, computed their dynamics, planned motions, and constructed feedback controllers to achieve desired motions and forces. In Chapter 12 we move outward from the robot itself to its interaction with objects in its environment. In short, the focus is on manipulation rather than the manipulator.
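Here is a minimal Python sketch of this combination step, assuming the projection P, the two control-law wrenches, and the body Jacobian have already been computed; how P is computed is skipped here, as in the video, and the function name is hypothetical.

```python
import numpy as np

def hybrid_motion_force_torques(P, F_motion, F_force, J_b):
    """Combine motion- and force-control wrenches and map them to joint torques.

    P        : 6x6 projection onto wrenches that cause motion of the end-effector
    F_motion : wrench commanded by the motion control law (body frame)
    F_force  : wrench commanded by the force control law (body frame)
    J_b      : body Jacobian of the robot at the current configuration
    """
    I = np.eye(6)
    F_b = P @ F_motion + (I - P) @ F_force   # keep only the appropriate components of each law
    tau = J_b.T @ F_b                        # joint forces and torques
    return tau
```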
Modern_Robotics_Chapter_94_TimeOptimal_Time_Scaling_Part_1_of_3.txt
In the next few videos, we consider the following problem: Given a desired path theta-of-s, find the time-optimal time scaling along this path, considering the dynamics of the robot and torque limits at the robot joints. Minimum-time motions can be used to maximize the productivity of a robot. You could imagine trying to optimize other criteria, like the amount of energy consumed by the actuators, but in these next few videos we will focus on time-optimal trajectories. Recall that the dynamics of a robot can be written as M times theta-double-dot plus a velocity-product term plus a gravity term equals the joint force and torque vector tau. Here I've written the velocity-product term using the Christoffel symbol matrix Gamma to emphasize that it is quadratic in the joint velocity vector. Keeping in mind that we are only interested in the dynamics of the robot when it is on the path theta-of-s, we can rewrite theta-dot and theta-double-dot as a function of the derivatives of the path with respect to s and the derivatives of s with respect to time. Plugging these expressions into the dynamics, we get the expression shown here. Since the path is given in advance, the derivatives of theta with respect to s are also known in advance, and only s, s-dot, and s-double-dot are variables. We can therefore write this equation as the vector equation m-of-s times s-double-dot plus c-of-s times s-dot-squared plus g-of-s equals tau. Each of m-of-s, c-of-s, and g-of-s is a vector function of s, where c-of-s times s-dot-squared is a velocity-product term, g-of-s is the gravity term, and m-of-s plays the role of a mass. Some elements of the m-vector may be negative, however. This equation is the dynamics of the robot when it is restricted to move along the path theta-of-s. This equation says nothing about the dynamics when the robot is off the path. Now that we've expressed the dynamics in terms of the single path parameter s, as opposed to the joint vector theta, we have to consider the limits on the forces or torques that the robot's actuators can produce. The limits at the i'th joint can be written as tau_i is greater than tau_i-min and less than tau_i-max. For example, tau_i-min could be minus-five newton-meters and tau_i-max could be plus-five newton-meters. But in general the limits are a function of theta and theta-dot. In particular, the maximum torque that can be produced by an electric motor typically decreases as the velocity increases, until eventually it becomes zero. Remembering that we can express theta and theta-dot in terms of s and s-dot when the robot is restricted to the path, we can rewrite the actuator limits as a function of s and s-dot. If we substitute the i'th component of the path-restricted dynamics in for tau_i, we get these constraints. The i'th actuator therefore places limits on the possible accelerations s-double-dot along the path when the robot is at the state (s, s-dot). To determine the i'th actuator's limits on s-double-dot, we subtract c_i times s-dot-squared and g_i from all three expressions, then divide by m_i. Since m_i could be positive or negative, there are two possible cases: if m_i is positive, then L_i, the lower limit on s-double-dot, and U_i, the upper limit on s-double-dot, are given by these equations. If m_i is negative, then L_i and U_i are given by these equations. These equations tell us the maximum and minimum accelerations s-double-dot along the path that joint i will allow at the state (s, s-dot). 
If we calculate L_i and U_i for all the joints, then L of (s, s-dot), the minimum feasible acceleration s-double-dot at the state (s, s-dot), is just the maximum of the lower limits over all the joints. Similarly, U of (s, s-dot), the maximum feasible acceleration s-double-dot, is just the minimum of the upper limits over all the joints. We can now express the constraints on the robot's acceleration along the path compactly as s-double-dot is greater than L of (s, s-dot) and less than U of (s, s-dot). At some states, L may actually be greater than U, and in this case there is no feasible acceleration that keeps the robot on the path. In other words, if the robot found itself at such a state, it would immediately have to leave the path; the actuators are not strong enough to keep the robot on the path. This typically happens when the robot is moving at high speed. Now that we have reduced the actuator constraints to s-double-dot is greater than L and less than U, we can mathematically express the time-optimal scaling problem as follows: Given a path theta-of-s, the initial state where both s and s-dot are zero, and the final state where s is one and s-dot is zero, we want to find a monotonically increasing twice-differentiable time scaling that (a) satisfies the terminal conditions and (b) minimizes the total travel time capital T while satisfying the actuator constraints. This problem lends itself to a nice graphical interpretation, as we will see in the next video.
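Here is a Python sketch of these acceleration limits, assuming the vectors m(s), c(s), g(s) and the actuator limits have already been evaluated at the current state. It also assumes no component m_i is exactly zero (so-called zero-inertia points need special handling), and the function name is hypothetical.

```python
import numpy as np

def accel_limits(m, c, g, tau_min, tau_max, sdot):
    """Feasible range [L, U] of path acceleration s-double-dot at the state (s, sdot).

    m, c, g          : vectors m(s), c(s), g(s) of the path-restricted dynamics
                       m_i * sddot + c_i * sdot**2 + g_i = tau_i
    tau_min, tau_max : actuator limits at the current state
    Returns (L, U); if L > U, no feasible acceleration keeps the robot on the path.
    """
    m, c, g = np.asarray(m, float), np.asarray(c, float), np.asarray(g, float)
    rhs_min = np.asarray(tau_min, float) - c * sdot**2 - g
    rhs_max = np.asarray(tau_max, float) - c * sdot**2 - g
    L_i = np.where(m > 0, rhs_min / m, rhs_max / m)   # each joint's lower limit on sddot
    U_i = np.where(m > 0, rhs_max / m, rhs_min / m)   # each joint's upper limit on sddot
    return L_i.max(), U_i.min()                        # L = max of lower limits, U = min of upper limits
```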
Modern_Robotics_Chapter_322_Angular_Velocities.txt
Let's say that R is the rotation matrix of a frame {b} relative to a frame {s}. A simple idea is to define the rate of rotation of {b}, which is also called the angular velocity, to be R-dot, the time rate of change of R. But this has 9 variables, and we should be able to find a good representation of angular velocity using only 3 variables. Unlike the curved space of orientations, SO(3), represented here as a sphere, at any given orientation, the space of angular velocities is a flat 3-dimensional vector space tangent to SO(3) at that orientation. A 3-dimensional vector space can be represented globally, without any singularities, by three coordinates. This tinkertoy coordinate frame represents the body frame {b}. Now imagine a rotation axis passing through the origin, and the motion of the frame as it rotates about that axis according to the right-hand rule. Any angular velocity can be represented by a rotation axis and the speed of rotation about it. We can express the axis as a unit vector in the {s} frame, writing it as omega-hat_s. The hat means that the vector has unit length. We call the rate of rotation theta-dot, and we can multiply the unit axis omega-hat_s by the rate of rotation theta-dot to get the angular velocity vector omega_s, expressed in the {s} frame. As the frame rotates about the axis, the {b}-frame x-axis traces out a circle. The linear velocity of the x-axis is in a direction tangent to this circle, and is calculated as omega_s cross x-hat_b. A similar relationship holds for the other two coordinate axes. Since we will often take the cross product of a vector with another vector, we define a bracket notation that allows us to write x crossed with y as bracket-x times y, where bracket-x is a 3 by 3 matrix representation of the 3-vector x. The matrix bracket-x is called a skew-symmetric matrix because bracket-x is equal to the negative of its transpose. The set of all 3 by 3 skew-symmetric matrices is called little so(3), due to its relationship to big SO(3), the space of rotation matrices. With the bracket notation, we can write the relationship between R-dot and the angular velocity omega_s as R-dot = bracket omega_s times R. The angular velocity vector can be expressed in other frames, not just the {s} frame. For example, we could write it in the {b} frame coordinates. Using our change of reference frame subscript cancellation rule from the previous video, we get omega_b equals R_bs times omega_s, or R_sb inverse times omega_s. R usually indicates the body frame relative to the space frame, so we can drop the subscripts and write the relationship between the body angular velocity and spatial angular velocity as omega_b equals R inverse times omega_s and omega_s equals R times omega_b. We will also find the little so(3) matrix representation of the angular velocity to be useful. In the next video we will begin to learn how to integrate a constant angular velocity for a given time to find a rotational displacement.
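Here is a small Python sketch of the bracket notation and the relation R-dot = bracket omega_s times R; the numerical values are arbitrary.

```python
import numpy as np

def so3_bracket(omega):
    """Return the 3x3 skew-symmetric matrix [omega] so that [omega] @ y = omega cross y."""
    wx, wy, wz = omega
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

# Example: a unit rotation axis along z_s, spinning at 2 rad/s.
omega_s = 2.0 * np.array([0.0, 0.0, 1.0])
R_sb = np.eye(3)                        # body frame currently aligned with {s}
R_dot = so3_bracket(omega_s) @ R_sb     # R-dot = [omega_s] R
omega_b = R_sb.T @ omega_s              # the same angular velocity in {b} coordinates
```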
Modern_Robotics_Chapter_1122_Linear_Error_Dynamics.txt
The error dynamics describes the combined dynamics of the robot and the controller in response to a reference input. As we saw in the last video, an error response could look something like this. Here the error response has zero steady-state error and lots of overshoot and oscillation. This plot also happens to be the position response of a mass-spring-damper, where the position of the mass is theta_e. As we will see soon, designing a controller is much like choosing the spring constant k and the damping constant b. If we choose the stiffness k to be large, the spring pulls the error toward zero faster, and if we choose the damping constant b to be large, we can get rid of overshoot and oscillation. The motion of the mass, and the error dynamics, is described by this linear ordinary differential equation: the mass times acceleration plus the damping times the velocity plus the stiffness times the position is equal to zero. This is called a second-order differential equation, since the second derivative of theta_e appears. More generally, in this chapter we will consider error dynamics that look like this p-th order differential equation, depending on p derivatives of theta_e. The right-hand side of this equation is a nonzero constant c, and such a differential equation is called nonhomogeneous. If c is equal to zero, as it was for our mass-spring-damper example, then the differential equation is homogeneous. We can divide both sides by the coefficient a_p to get our preferred form of a homogeneous differential equation, with coefficients a^prime. We could also write this equation in this equivalent form. This single p-th order differential equation can be expressed as p first-order differential equations. Let's define the state vector x, consisting of variables x_1 to x_p. x_1 is theta_e, x_2 is theta_e-dot, etc. The rate of change of x_p, namely x_p-dot, is given by this equation, which is equivalent to the single p-th order differential equation we saw earlier. With our definition of the state vector x, we can write the p-th-order differential equation as a first-order vector differential equation x-dot equals A-x, where A is a p-by-p matrix. The solution to this vector differential equation is given by the matrix exponential e to the A-t, as we saw in Chapter 3. Since A is not an element of little so(3) or little se(3), the analytic solutions that we saw in Chapter 3 do not apply. Instead, to understand the character of the error response, we will study the eigenvalues of the A matrix. These eigenvalues determine whether an initial error, x at time 0, grows or shrinks with time. If x and A are both scalars, then the error shrinks as a decaying exponential if A is negative, and it grows exponentially if A is positive. The generalization of this observation to the case where x is a vector and A is a matrix, whose eigenvalues are complex numbers in the general case, is that the error is only guaranteed to decay to zero if the real components of all the eigenvalues are negative. In other words, if the real components of all the eigenvalues of the matrix A are negative, then the error dynamics are stable. The eigenvalues s of A are given by the roots of the characteristic equation of A, the determinant of s times the identity matrix minus A equals zero. Therefore we often refer to the eigenvalues as roots. The form of this characteristic equation comes directly from the original p-th-order differential equation, replacing the i-th derivative of theta_e with s-to-the-i.
A necessary condition for all the roots to have a negative real component is that all the coefficients a^prime are positive. This condition is also sufficient for stability for first- and second-order error differential equations, but not for third-order or higher. In the coming videos, we will return to the second-order error dynamics we saw at the beginning of this video, as it is the simplest error differential equation that exhibits overshoot and oscillation, which are common behaviors in controlled systems.
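Here is a Python sketch of this stability test: build the p-by-p matrix A in companion form from the coefficients a-prime and check that all of its eigenvalues have negative real parts. The mass-spring-damper numbers at the end are hypothetical.

```python
import numpy as np

def error_dynamics_matrix(a_prime):
    """Companion-form A matrix for the homogeneous error dynamics
    theta_e^(p) + a'_{p-1} theta_e^(p-1) + ... + a'_0 theta_e = 0,
    with state x = (theta_e, theta_e-dot, ..., theta_e^(p-1))."""
    p = len(a_prime)
    A = np.zeros((p, p))
    A[:-1, 1:] = np.eye(p - 1)       # x_i-dot = x_{i+1}
    A[-1, :] = -np.asarray(a_prime)  # x_p-dot = -a'_0 x_1 - ... - a'_{p-1} x_p
    return A

def is_stable(a_prime):
    """Stable if and only if every eigenvalue of A has a negative real part."""
    return bool(np.all(np.linalg.eigvals(error_dynamics_matrix(a_prime)).real < 0))

# Mass-spring-damper: theta_e-ddot + (b/m) theta_e-dot + (k/m) theta_e = 0,
# with hypothetical values m = 1, b = 2, k = 10, so a_prime = [10, 2].
print(is_stable([10.0, 2.0]))   # True
```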
Modern_Robotics_Chapter_7_Kinematics_of_Closed_Chains.txt
In Chapters 4, 5, and 6, we studied the forward kinematics, velocity kinematics and statics, and inverse kinematics of open-chain robots. In Chapter 7, and in this single video, I am going to cover all of these topics for closed-chain robots, without going into great detail. Kinematics and statics are generally more complicated for closed-chain robots, because there is such a wide variety of design possibilities. The configuration space of closed-chain robots can be quite complex, since they must satisfy a number of loop-closure equations. There are classes of singularities that don't exist for open-chain robots, and the choice of which joints to actuate and which to leave passive can affect the singularities that occur. Oftentimes the analysis of these robots is based on symmetries and insight into the specific structure of the mechanism. In this chapter we take an example-based look at some of these issues. The study of closed-chain robots is an active research area, and this chapter just skims the surface. Let's start by looking at some examples. The first example is a 4-degree-of-freedom robot arm. The end-effector moves in x, y, and z, and it rotates about a vertical axis. Although it looks similar to an open-chain robot, it is a closed chain due to the parallelogram-type linkage. The next example is a 4-degree-of-freedom Delta robot. The end-effector moves in x, y, and z, and it rotates about a vertical axis. There are also 3-degree-of-freedom Delta robots that eliminate the rotational motion. The final example is the Stewart platform, which moves with the full 6 degrees of freedom of a rigid body. Each of the 6 legs is actuated by a prismatic joint. At one end of each leg is a spherical joint while the other end has a spherical or universal joint. The Stewart platform is popular for applications like aircraft simulators, since it can move the virtual cockpit with all 6 degrees of freedom. The Delta robot and the Stewart platform are examples of parallel robots. A parallel robot is a specific type of closed chain which consists of a moving platform attached to a base through a set of actuated legs. For the rest of this video, I will focus on parallel robots. Let's summarize some typical characteristics of open-chain and parallel robots. For an open-chain robot, typically each joint has a motor driving it. For parallel robots, many of the joints are unactuated. Open-chain robots tend to have a large workspace, since each extra joint adds to the possible motion of the end-effector. Parallel robots tend to have a small workspace, since each leg in parallel places constraints on the motion of the platform. Each joint of an open-chain robot has to support all of the end-effector force, so open-chain robots tend to be relatively weak. Also, flexibility at the joints and links tends to add. Parallel robots, on the other hand, tend to be stiff and strong, since the end-effector force is distributed among the legs. As we saw in chapter 4, the forward kinematics mapping joint values to end-effector configurations is relatively easy to evaluate for open-chain robots using the product of exponentials. On the other hand, there may be multiple solutions to the forward kinematics for parallel robots, and finding them can be challenging. Finally, as we saw in chapter 6, solving the inverse kinematics for an open-chain robot can be tricky. There may be multiple solutions, and numerical methods may be required to find them. 
The inverse kinematics of a parallel robot is sometimes straightforward, as we will see. To solidify our understanding of these characteristics, let's use the Stewart platform as an example. The fixed frame is {s} and the end-effector frame is {b}. The configuration of the {b} frame relative to the {s} frame is T_sb-of-theta, where theta is the vector of joint variables representing the leg lengths. For the ith leg, theta_i is the length of the leg. a_is is the vector from the {s}-frame to leg i's joint at the base, measured in the {s}-frame, and b_ib is the vector from the {b}-frame to the top joint of leg i, measured in the {b}-frame. We can transform b_ib to the {s}-frame by premultiplying by the desired end-effector configuration T_sb, provided we represent the vectors in homogeneous coordinates. Now we can calculate the prismatic joint value theta_i as the distance between b_is and a_is. Inverse kinematics is easy for the Stewart platform. If the legs of the parallel robot are more general open chains, then we have to solve an inverse kinematics problem for each leg. Next let's address the inverse velocity kinematics mapping the end-effector twist to the joint velocities. Let v-hat_i be the unit 3-vector aligned with the direction of positive motion of the i-th axis. Skipping the straightforward derivation, we can define a screw axis V_i, expressed in the {s}-frame, with the linear component v-hat_i and the angular component a_is cross v-hat_i. Then the joint velocity theta-dot_i is equal to the screw axis V_i dotted with the spatial twist V_s; this calculates the component of V_s along the joint axis. Repeating this analysis for all the legs, we can write the ith row of the inverse of the space Jacobian, or J_s-inverse, as the screw axis V_i-transpose. Now if the Jacobian-inverse is invertible, we have the velocity kinematics and statics in the {s} frame: the spatial twist V_s equals J_s times theta-dot and the joint forces tau equals J_s-transpose times F_s, the wrench applied by the end-effector. One of the difficulties of analyzing closed-chain robots, however, is understanding all the possible singularities where the Jacobian is not invertible. Let's consider a simpler robot, the 3-by-RPR parallel mechanism, which is the planar analog of the Stewart platform. The platform moves in all three planar degrees of freedom and is driven by three legs. Each leg has two unactuated revolute joints and one actuated prismatic joint. If we put the robot at this configuration, it is at a singularity. From this configuration, if we extend the legs at an equal rate, the platform could either rotate counterclockwise or clockwise, and we cannot predict which. Closed-chains can be subject to several types of singularities, described in detail in the book, some of which have no analogs in open-chain robots. Examples include configuration-space singularities, actuator singularities, and end-effector singularities. Some of these singularities occur at configurations where the constraint Jacobian, which is the matrix of derivatives of the loop-closure equations with respect to the passive and actuated joint variables, loses rank. Lastly, we address the forward kinematics problem for closed-chains, which was the first problem we addressed for open chains. The forward kinematics problem often involves solving one or more complex nonlinear equations, and in general the forward kinematics has multiple possible solutions. 
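Returning to the Stewart platform inverse kinematics described above, here is a Python sketch of the leg-length calculation; the layout of the attachment points a_i and b_i is left to the caller, and the function name is hypothetical.

```python
import numpy as np

def stewart_platform_ik(T_sb, a_s, b_b):
    """Inverse kinematics of a Stewart platform: leg lengths for a desired platform pose.

    T_sb : 4x4 desired configuration of the platform frame {b} relative to {s}
    a_s  : 6x3 array, base attachment points a_i expressed in {s}
    b_b  : 6x3 array, platform attachment points b_i expressed in {b}
    Returns the 6-vector of leg lengths theta.
    """
    R, p = T_sb[:3, :3], T_sb[:3, 3]
    b_s = (R @ b_b.T).T + p                    # platform attachment points expressed in {s}
    return np.linalg.norm(b_s - a_s, axis=1)   # theta_i = distance between b_is and a_is
```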
The 3-by-RPR robot can have up to 6 possible end-effector configurations given a set of prismatic joint extensions. This figure shows two possible solutions when all joint extensions are equal. The 6-dof Stewart platform can have up to 40 solutions for a given set of leg extensions. For a given set of leg extensions, usually there are far fewer real solutions. In practice, it is common to use iterative numerical methods with a nearby solution as an initial guess, similar to the Newton-Raphson method we developed for the inverse kinematics of open chains. In this video I have given you a quick summary of the topics of Chapter 7, which itself is a quick summary of the kinematic analysis of closed-chain robots. The design and analysis of closed-chain robots is an active research field, but Chapter 7 should give you a good idea of some of the key issues. So, as Chapter 7 concludes, so does our kinematic and static analysis of robots. In Chapter 8 we will begin our study of robot dynamics, which governs how a robot moves when forces and torques are applied at joints. This will be our springboard to advanced topics, like the design of time-optimal trajectories and controllers for robots.
Modern_Robotics_Chapter_22_Degrees_of_Freedom_of_a_Robot.txt
In the previous video, we learned that the number of degrees of freedom of a robot is equal to the total number of freedoms of the rigid bodies minus the number of constraints on their motion. The constraints on motion often come from joints. The most common type of joint is the revolute joint. It places 5 constraints on the motion of the second spatial rigid body relative to the first, and therefore the second body has only one degree of freedom relative to the first body, given by the angle of the revolute joint. Another common joint with one degree of freedom is the prismatic joint, also called a linear joint. We can also have joints with more than one degree of freedom, like this universal joint, which has two degrees of freedom. The spherical joint, also called a ball-and-socket joint, has three degrees of freedom: the two degrees of freedom of the universal joint plus spinning about the axis. This table summarizes the previous four joints, plus two other types of joints, the one-degree-of-freedom helical joint and the two-degree-of-freedom cylindrical joint. This table shows the number of degrees of freedom of each joint, or equivalently the number of constraints between planar and spatial bodies. Using this table of freedoms and constraints provided by joints, we can come up with a simple expression to count the degrees of freedom of most robots, using our formula from Chapter 2.1. Let's say the robot has N links. By historical convention, N includes ground as a link. The robot has J joints. And we define m to be the degrees of freedom of a single body, so m equals 3 for a rigid body moving in the plane and m equals 6 for a rigid body moving in 3-dimensional space. We can write our equation in terms of these variables: N-1 is the number of links other than ground, and m times N-1 is the total number of freedoms of the bodies if they are not constrained by joints. Then we subtract off the constraints provided by the J joints. Since the number of constraints provided by joint i is equal to m minus the number of freedoms allowed by joint i, we can replace c_i by m minus f_i and rewrite the equation like this. Rearranging once more, we get this. This is called Grubler's formula, and it assumes that the constraints provided by the joints are independent. Let's apply Grubler's formula to a few mechanisms. The first mechanism is called a serial, or open-chain, robot, because there is a single path from ground to the end of the robot. It's called a 3R robot, meaning it has three revolute joints. This planar robot has m=3, N=4, J=3, and one freedom at each joint. Grubler's formula gives 3(4-1-3)+3 = 3, so the robot has 3 degrees of freedom, as we expect. The next mechanism is called a four-bar linkage, obtained by pinning the endpoint of the 3R robot to a particular location in the plane. This is called a closed-chain mechanism, because there's a closed loop. As before, we have m=3 and N=4, but now we have J=4 joints. Grubler's formula tells us that this mechanism has 3(4-1-4)+4 = 1 degree of freedom. We would also predict this by the fact that pinning the endpoint of the 3R robot to a particular x-y location creates two constraints, so we can subtract 2 from the 3 freedoms of the 3R robot to see that there is one degree of freedom. The next mechanism is like the four-bar, except now it adds one more link and two more joints.
Grubler's formula would tell us that this mechanism has zero degrees of freedom, but that's wrong; it still has one degree of freedom, just like the four-bar. The reason that Grubler's formula does not apply is that the joint constraints are not independent. Testing whether joint constraints are independent is not an easy task, and we won't pursue it further. Finally, we have a spatial closed-chain mechanism called a Stewart platform. It has 6 legs connecting the bottom platform to the top platform, and each leg consists of two links and a universal joint, a prismatic joint, and a spherical joint. The prismatic joints are actuated, creating motion of the top platform as you see in the video. Since each leg has 2 links, there is a total of 12 links in the legs, and adding ground and the top platform makes 14 links total. Each leg has 3 joints with 6 degrees of freedom total, for a total of 18 joints with 36 total freedoms. The mechanism moves in 3-dimensional space, making m equal to 6. Grubler's formula tells us that the Stewart platform has 6(14-1-18)+36 = 6 degrees of freedom. The top platform can be moved with all 6 degrees of freedom of a rigid body. There are limits to the range of motion, of course, but these limits do not reduce the number of degrees of freedom. In the next video we will explore another important property of a configuration space: its topology.
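Grubler's formula is a one-line computation, as in this Python sketch, which reproduces the three examples above (and, like the formula itself, gives a wrong answer when the joint constraints are not independent).

```python
def grubler(m, N, J, joint_freedoms):
    """Grubler's formula: dof = m*(N - 1 - J) + sum of joint freedoms.
    Assumes the constraints provided by the joints are independent."""
    assert len(joint_freedoms) == J
    return m * (N - 1 - J) + sum(joint_freedoms)

print(grubler(3, 4, 3, [1, 1, 1]))         # planar 3R arm: 3 dof
print(grubler(3, 4, 4, [1, 1, 1, 1]))      # four-bar linkage: 1 dof
print(grubler(6, 14, 18, [2, 1, 3] * 6))   # Stewart platform (U, P, S per leg): 6 dof
```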
Modern_Robotics_Chapter_87_Constrained_Dynamics.txt
We've been discussing the dynamics of open-chain robots. If the robot's motion is subject to constraints, however, like nonholonomic constraints due to wheels or the loop-closure constraints of parallel robots, such as the Stewart platform, we have to add forces that enforce the constraints. Another example of a robot subject to constraints is a humanoid robot. Here, the feet in contact with the ground create a closed loop, and if the robot grips the box tightly, there is another closed loop through the arms. These closed loops have to be enforced by constraint forces. Finally, an open-chain robot erasing a whiteboard is another example of a robot subject to constraints. We can write the constraints on the robot configuration as the vector equation b of theta of t equals zero. These configuration constraints keep the eraser pressed against the board. Since these constraints are satisfied at all times, the time derivative of b must also be zero. By the chain rule, the time derivative can be expressed as the matrix of partial derivatives of b with respect to theta times theta-dot, or A of theta times theta-dot for short, where the A matrix is k-by-n, representing k velocity constraints on the n joints of the robot. These Pfaffian velocity constraints can also represent nonholonomic constraints, as discussed in Chapter 2. We assume these k equality constraints are workless, meaning that the forces that enforce these constraints do no work on the robot. For the example of the robot erasing the board, this means that there is no friction between the eraser and the board. Without constraints, these are the equations of motion of the robot. With constraints, the robot joint forces and torques tau may include forces against the constraints, tau_con. Thus the joint torques can be separated into components that move the robot and components that act against the constraints. Since the constraints are workless, the dot product of the torques against the constraints with the joint velocities must be zero. We also know that the velocity constraints have the form A theta-dot equals zero, so therefore the constraint torques must be a linear combination of the rows of A, where the k-vector of coefficients lambda is called a vector of Lagrange multipliers. With this observation, we can rewrite our dynamics in this form. Since the velocity constraints must be satisfied at all times, they can be expressed as constraints on the acceleration. These are now n-plus-k equations in n-plus-k variables, the k Lagrange multipliers and either n joint accelerations or n joint torques, depending on whether we are solving the constrained forward dynamics or the constrained inverse dynamics. Skipping the derivation, which is given in the book, we can eliminate the k Lagrange multipliers by defining an n-by-n projection matrix P of theta equal to the n-by-n identity matrix minus A-transpose times the inverse of A M-inverse A-transpose times A M-inverse. The rank of this n-by-n matrix is n-minus-k. Using this projection matrix, we can define the constrained inverse dynamics, P times tau equals P times M theta-double-dot plus h. Since P is not invertible, we cannot premultiply both sides by P-inverse to get the unconstrained inverse dynamics. Instead, P projects the joint torques tau to the joint torques that move the robot, eliminating the joint torques against the constraints that cause no motion of the robot. 
To solve the inverse dynamics, we plug in the joint positions, velocities, and accelerations on the right-hand side to calculate the joint torques that create the desired joint accelerations. To this solution, we can add any joint torques of the form A-transpose lambda, which create forces against the constraints and do not affect the motion of the robot. This will be useful in Chapter 11 when we discuss hybrid motion-force control, where we control the robot to achieve a desired motion in the unconstrained directions and to achieve desired end-effector forces in the constrained directions.
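Here is a Python sketch of the projection matrix and one particular inverse-dynamics solution. In the sketch, h stands for the velocity-product and gravity terms, lam is an optional vector of Lagrange multipliers, and the function names are hypothetical.

```python
import numpy as np

def constraint_projection(A, M):
    """P = I - A^T (A M^-1 A^T)^-1 A M^-1, the rank (n - k) projection onto
    joint torques that move the robot rather than act against the constraints."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    return np.eye(n) - A.T @ np.linalg.inv(A @ Minv @ A.T) @ A @ Minv

def constrained_inverse_dynamics(A, M, h, thetaddot, lam=None):
    """One solution of P tau = P (M thetaddot + h).  Adding A^T lam changes the
    forces against the constraints but not the motion of the robot."""
    tau = M @ thetaddot + h
    if lam is not None:
        tau = tau + A.T @ lam
    return tau
```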
Modern_Robotics_Chapter_321_Rotation_Matrices_Part_2_of_2.txt
There are three common uses of a rotation matrix: The first is to represent an orientation. The second is to change the frame of reference of a vector or frame. And the third is to rotate a vector or frame. To demonstrate these, I will use these three coordinate frames, representing the same space with different orientations. To help you visualize these frames in 3 dimensions, I'll use my handy tinkertoy frame. This is the z-axis, this is the x-axis, and this is the y-axis. So initially I'll orient it aligned with the frame {s}. And then I'll rotate by 90 degrees about the z-axis, and then I get the frame {b}, and then if I rotate by -90 degrees about the y-axis, I get the frame {c}. As we saw in a previous video, we can represent {c} relative to {s} by writing the coordinate axes of {c} in {s} coordinates, yielding the rotation matrix R_sc. If we write the coordinate axes of {s} in {c} coordinates, the resulting rotation matrix R_cs is just the transpose, or inverse, of R_sc. To demonstrate a change of reference frame, consider the rotation matrix R_bc, representing the orientation of frame {c} in frame {b} coordinates. If we want to express the {c} frame in {s} coordinates instead of {b} coordinates, we can perform the matrix multiplication R_sc equals R_sb times R_bc. By premultiplying R_bc by R_sb, we've changed the representation of the {c} frame from the {b} frame to the {s} frame, as we can verify by inspecting the rotation matrices. You can remember the change of reference frame operation by a subscript cancellation rule: if the second subscript of the first matrix matches the first subscript of the second matrix, they cancel each other, leaving the two remaining subscripts in the right order. We can also change the frame of reference of a vector. Let p_b be the position of point p when expressed in {b} frame coordinates. To express p in {s} coordinates, we can premultiply p_b by R_sb to get p_s. This operation again satisfies a subscript cancellation rule. The final use of a rotation matrix is to rotate a vector or frame. For example, it is apparent that the {b} frame is obtained from the {s} frame by rotating the {s} frame about the z_s axis by 90 degrees. Thus we could consider the matrix R_sb as an operation that rotates about the z-axis by 90 degrees. If we premultiply a vector p_b by this rotation operator, we just get a change of reference frame to {s} coordinates, as we saw before. But if the vector is p_s in {s} coordinates, then there is no subscript cancellation, and instead we get a new vector p-prime-s, obtained by rotating p_s by 90 degrees about the z_s axis. The vector has been rotated, but it is still represented in the original frame {s}. We can also rotate the frame c by premultiplying or postmultiplying R_sc by the rotation operator R. If you premultiply by R, the rotation axis is interpreted as the z-axis of the frame of the first subscript, {s}. You end up with a rotated frame {c-prime}, still expressed in {s}. If you postmultiply by R, the rotation axis is interpreted as the z-axis of the frame of the second subscript, {c}. You end up with a different rotated frame {c-double-prime}, still expressed in {s}. In summary, a rotation matrix has three uses: representing an orientation, changing the frame of reference of a vector or a frame, and rotating a vector or a frame. In the next video, we will learn how to represent the angular velocity of a frame.
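Here is a Python sketch of the three uses, with the same frames as above: {b} is {s} rotated 90 degrees about the z-axis, and {c} is {b} rotated -90 degrees about the y-axis.

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix for a rotation by theta about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(theta):
    """Rotation matrix for a rotation by theta about the y-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

R_sb = rot_z(np.pi / 2)           # {b}: {s} rotated +90 degrees about z
R_bc = rot_y(-np.pi / 2)          # {c}: {b} rotated -90 degrees about y

# Use 1: represent an orientation, e.g. the frame {c} in {s} coordinates.
R_sc = R_sb @ R_bc                # subscript cancellation: sb * bc -> sc

# Use 2: change the reference frame of a vector.
p_b = np.array([1.0, 0.0, 0.0])
p_s = R_sb @ p_b                  # the same point, now in {s} coordinates

# Use 3: rotate a vector (no subscript cancellation).
R = rot_z(np.pi / 2)
p_prime_s = R @ p_s               # p_s rotated 90 degrees about z_s, still expressed in {s}
```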
Modern_Robotics_Chapter_83_NewtonEuler_Inverse_Dynamics.txt
In this video we use the inverse dynamics of a rigid body that we derived in the last video to derive the Newton-Euler inverse dynamics algorithm for an open-chain robot. Consider an n-link robot with an end-effector. Each link is a rigid body, and the center of mass of each link is shown. We assign frames {1} through {n} at the centers of mass of the n links, as well as a frame {n+1} at the end-effector and a frame {0} fixed in the world. We define V_i to be the twist of link i expressed in frame {i}. With these definitions, I'll quickly summarize the algorithm. We'll come back to the details later. First, we perform the forward iterations, which calculate the configuration, twist, and acceleration of each link, starting from link 1 and moving outward. Given the vectors of joint positions, velocities, and accelerations, and starting from link 1, we calculate the twist V_i of link i as the sum of the twist of link i-minus-1, but expressed in the {i} frame, and the added velocity due to the joint velocity theta_i-dot. Then we calculate the acceleration of link i as the sum of the acceleration of link i-minus-1, but expressed in the {i} frame, plus the added acceleration due to the joint acceleration theta_i-double-dot, plus a velocity-product term due to theta_i-dot and the twist V_i. After the forward iterations are completed, we have the configuration, twist, and acceleration of each link. Now we perform the backward iterations, calculating the required joint forces and torques starting from joint n and moving back to joint 1. First we calculate the wrench F_i required for link i as the sum of the wrench F_i-plus-1, which is the wrench needed at link i-plus-1 but expressed in the {i} frame, plus the wrench needed to accelerate link i, using the inverse dynamics of a rigid body derived in the previous video. Then we calculate tau_i as the component of the wrench F_i along the joint screw axis. Only that portion of the wrench has to be applied by the joint motor; the rest of the wrench is provided passively by the mechanical structure of the joint, such as the bearings. At the end of the backward iterations, we have calculated all the joint forces and torques needed to create the desired joint accelerations at the current joint positions and velocities. That is all there is to it; the rest of this video is just filling in the details. So, formally, the recursive Newton-Euler inverse dynamics algorithm calculates tau given the joint positions, velocities, and accelerations, as well as the wrench F_tip that the robot end-effector applies to the environment. We define M_i, i minus 1 to be the transform defining the frame {i-1} relative to frame {i} when joint i is at its zero position. We define A_i to be the screw axis of joint i expressed in the frame {i}. We define the wrench F_n-plus-1 to be the wrench F_tip applied by the end-effector. Finally, to model gravity, we define the acceleration of the base of the robot, V_zero-dot, to be a linear acceleration opposite the gravity vector. This is because gravity is indistinguishable from upward acceleration. With these definitions, the forward iterations, from frame {1} to frame {n}, can be written as follows: First, the configuration of frame {i-1} relative to frame {i} is given by the formula shown here. 
Next, the twist of link i is the sum of the twist of link i-minus-1, but expressed in the frame {i} using the matrix adjoint of T_i,i-minus-1 calculated in the first step, plus the added twist due to the joint velocity theta_i-dot times the joint screw axis A_i. Finally, the acceleration of link i is the sum of the acceleration of link i_minus-1 expressed in the {i} frame, plus an acceleration due to a velocity-product term consisting of the Lie bracket of the twist V_i from the previous step and the joint velocity times the joint screw axis A_i, plus an added acceleration of the joint acceleration times the joint screw axis A_i. The derivations of these equations can be found in the book. At the end of the forward iterations, we have the configurations, twists, and accelerations of all the links. The twists and accelerations are expressed in the center-of-mass frames {i}. Now we begin the backward iterations, from frame {n} to frame {1}. First we calculate the wrench F_i required by link {i} as the sum of the wrench required by link {i+1}, but expressed in frame {i}, plus the wrench required by link {i} according to the inverse dynamics of a rigid body we derived in the previous video. Finally, we calculate the joint torque tau_i by projecting the wrench F_i on to the screw axis A_i. We now have the vector tau of all joint forces and torques needed for a given theta, theta-dot, theta-double-dot, and end-effector wrench F_tip. One advantage of this algorithm is that it involves no differentiation. Another is that it is computationally efficient due to its recursive nature, where calculation of link i's twist and acceleration uses link i-minus-1's twist and acceleration, and calculation of link i's wrench and joint torque uses link i-plus-1's wrench and joint torque. The inverse dynamics are useful for robot control. For simulation, however, we need to solve the forward dynamics. In the next video, I will demonstrate one way to use the Newton-Euler inverse dynamics algorithm to solve the forward dynamics.
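As an illustration, here is a Python sketch of just the backward iterations described above, assuming the forward iterations have already produced the link twists, accelerations, and the transforms T_{i+1,i}. It uses the (angular, linear) ordering of twists and wrenches; the helper functions are written out so the sketch is self-contained, and the function names are hypothetical.

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector."""
    wx, wy, wz = w
    return np.array([[0.0, -wz, wy], [wz, 0.0, -wx], [-wy, wx, 0.0]])

def adjoint(T):
    """6x6 matrix adjoint of a transform T = (R, p), for twists ordered (omega, v)."""
    R, p = T[:3, :3], T[:3, 3]
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[3:, :3] = skew(p) @ R
    Ad[3:, 3:] = R
    return Ad

def ad(V):
    """6x6 Lie bracket matrix [ad_V] of a twist V = (omega, v)."""
    w, v = V[:3], V[3:]
    adV = np.zeros((6, 6))
    adV[:3, :3] = skew(w)
    adV[3:, :3] = skew(v)
    adV[3:, 3:] = skew(w)
    return adV

def backward_iterations(T_next_i, A, G, V, Vdot, F_tip):
    """Backward pass of Newton-Euler: joint torques from the link twists and accelerations.

    T_next_i : list of transforms T_{i+1,i} for i = 1..n (the last maps to frame {n+1})
    A        : list of joint screw axes A_i expressed in the link frames {i}
    G        : list of 6x6 spatial inertia matrices G_i
    V, Vdot  : lists of link twists and accelerations from the forward iterations
    F_tip    : wrench applied by the end-effector to the environment
    """
    n = len(A)
    F = np.asarray(F_tip, float)
    tau = np.zeros(n)
    for i in reversed(range(n)):
        # F_i = Ad^T_{T_{i+1,i}} F_{i+1} + G_i Vdot_i - ad^T_{V_i} (G_i V_i)
        F = adjoint(T_next_i[i]).T @ F + G[i] @ Vdot[i] - ad(V[i]).T @ (G[i] @ V[i])
        tau[i] = F @ A[i]          # project the wrench onto the joint screw axis
    return tau
```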
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_12_Grasping_and_Manipulation.txt
In this chapter we focus on robot manipulation. One example of manipulation is grasping and carrying, and questions we could pose include: "How many fingers are needed to grasp the object firmly?" and "Where should the fingertips be placed?" We answer these questions in this chapter. Grasping is attractive because, once we have a firm grasp of the object, it follows the hand exactly, and controlling the motion of the object is as easy as controlling the motion of the hand. But manipulation is much more than just grasping and carrying. Manipulation occurs whenever a robot applies motions or forces to purposefully change the state of an object, and manipulation primitives include pushing, kicking, throwing, tapping, sliding, rolling, pivoting, toppling, and others. These manipulation primitives allow the robot to manipulate objects too large to be grasped or too heavy to be carried. They also allow a robot to manipulate several objects simultaneously. To automate planning and execution of robot manipulation, we need an understanding of the mechanics of contact. For example, to plan how to push an object on the floor, the robot should be able to predict whether the object will stay fixed to the pusher, or move relative to it. The robot should also be able to predict if a pushed object will slide, or if it will topple over. If a robot waiter carries a tray of glasses, it needs to know the motion constraints that keep the glasses from falling. Also, a robot that can reason about friction can use vibration to manipulate several sliding parts on a flat plate. With a good understanding of contact mechanics, we can solve the riddle of the meter-stick trick. The center of mass of this stick is at its center. If I support the stick by two fingers, with one finger close to the center of mass, and I move that finger toward the stationary finger, what happens to the stick? Does it fall? No, in fact the stick moves so that the center of mass always stays between the two fingers. You'll be able to predict this behavior using the tools in Chapter 12. We assume that the objects are rigid bodies. To analyze manipulation of rigid bodies, we need 3 ingredients: First, contact kinematics tells us how a contact between two rigid bodies constrains the motion of each. Second, we need a model of forces that can be transmitted through a contact, including frictional forces. Third, rigid-body dynamics, as we studied in Chapter 8, tells us the relationship between forces and motions of rigid bodies. If motions are slow, then we can assume that velocity and acceleration terms are negligible, and therefore contact forces and gravity forces must balance. This is called the quasistatic assumption. Chapter 12 focuses on the first two topics, and applies the ideas to several different manipulation problems. Throughout this chapter, I'll be talking about linear combinations of vectors, so let's define the linear span, positive span, and convex span of a set of vectors. Let's define A as a set of vectors a_1 through a_j in an n-dimensional space, drawn as arrows emanating from an origin. In the drawing here, the vectors live in a 2-dimensional space. Then the linear span of A is the set of all linear combinations of these vectors. For the three vectors shown here, the linear span is the entire two-dimensional space; any point in the plane can be obtained by a linear combination of these vectors. In fact, any point can be represented as a linear combination of any two of these vectors.
Next we define the positive span, also called the nonnegative span or the conical span. It's defined as the set of all linear combinations where the combination coefficients are nonnegative. All points inside the cone shown can be obtained by a nonnegative linear combination of the vectors. In fact, we could get rid of the vector inside the cone, since it doesn't change the positive span. Finally, we define the convex span, where the coefficients are all nonnegative and sum to one. The convex span is indicated by the triangle and its interior. Clearly the convex span is a subset of the positive span which is a subset of the linear span. The following facts will also be useful: The space R^n can be linearly spanned by n vectors, but no fewer, and the space R^n can be positively spanned by n+1 vectors, but no fewer. In the next video we begin our study of contact kinematics.
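Checking whether a given vector lies in the positive span or convex span of a set of vectors is a linear feasibility problem. The following Python sketch uses scipy's linprog to perform the check; the vectors at the bottom are illustrative choices, not the ones in the figure.

```python
import numpy as np
from scipy.optimize import linprog

def in_positive_span(A, b):
    """Is b a nonnegative combination of the columns of A, i.e., does k >= 0 exist with A @ k = b?"""
    j = A.shape[1]
    res = linprog(c=np.zeros(j), A_eq=A, b_eq=b, bounds=[(0, None)] * j, method="highs")
    return res.success

def in_convex_span(A, b):
    """Same check, with the extra constraint that the coefficients sum to one."""
    j = A.shape[1]
    A_eq = np.vstack([A, np.ones((1, j))])
    b_eq = np.append(b, 1.0)
    res = linprog(c=np.zeros(j), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * j, method="highs")
    return res.success

# Illustrative vectors a_1, a_2, a_3 in the plane, as columns of A.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
print(in_positive_span(A, np.array([2.0, 3.0])))    # True: 2*a_1 + 3*a_2
print(in_positive_span(A, np.array([-1.0, 0.0])))   # False: would need a negative coefficient
print(in_convex_span(A, np.array([0.5, 0.5])))      # True: 0.5*a_1 + 0.5*a_2
```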
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_81_Lagrangian_Formulation_of_Dynamics_Part_1_of_2.txt
In Chapter 8, we study the dynamics of open-chain robots. For example, the forward dynamics problem is to calculate the joint accelerations theta-double-dot given the current joint positions theta, the joint velocities theta-dot, and the forces and torques tau applied at each joint. The forward dynamics is useful for simulation. The inverse dynamics problem is to find the joint forces and torques tau needed to create the acceleration theta-double-dot for the given joint positions and velocities. The inverse dynamics is useful in the control of robots. Robot dynamics is necessary not just for simulation and control, but also for the analysis of robot motion planners and controllers, as we'll see in chapters 9 through 11. In this book we study two approaches to solving the forward and inverse dynamics problems. The first is the Lagrangian formulation, a variational approach based on the kinetic and potential energy of the robot. The second approach is the Newton-Euler formulation, which relies on f equals m-a applied to each individual link of the robot. The focus of Chapter 8 is primarily on the Newton-Euler formulation, because it uses some of the geometric tools we have already developed, and it results in an efficient recursive algorithm for calculating the inverse dynamics. In this video, though, we start with the Lagrangian formulation, due to its conceptual simplicity. The key object in the Lagrangian formulation is the Lagrangian L. The Lagrangian for a mechanical system is its kinetic energy minus its potential energy. The potential energy P depends only on the configuration theta, while the kinetic energy K depends on theta and theta-dot. I won't derive the Lagrangian equations of motion, which you can find in many textbooks on mechanics. I'll just state the result: the vector of joint forces and torques tau is equal to the time derivative of the partial derivative of L with respect to theta-dot minus the partial derivative of L with respect to theta. The joint forces and torques tau are dual to the joint velocities theta-dot, meaning that tau dotted with theta-dot represents the power consumed or produced by the joints. We can write this vector equation in its components as shown here, where tau_i is the i-th element of the n-vector tau. Let's apply the formulation to a 2R robot in gravity. The lengths of the links are L_1 and L_2, and all the mass of the robot is concentrated in point masses m_1 and m_2 as shown. We need to calculate the kinetic and potential energy of the two point masses, so first we calculate the position of mass_1, given by the coordinates x_1 and y_1. We can take the derivative to get the velocity of m_1. We can do the same for mass_2, deriving its position and velocity. With this information, we can calculate the kinetic energy of link_1 as one-half m_1 v_1-squared, where v_1-squared is just x_1-dot-squared plus y_1-dot-squared. Applying our earlier derivation, the kinetic energy simplifies to one-half m_1 L_1-squared theta_1-dot-squared. We can similarly calculate the kinetic energy of link_2. The potential energy of each mass depends only on its height, or its y-coordinate. Now we can calculate the Lagrangian as the sum of the kinetic energies minus the potential energies of the links, and express the joint torques in terms of the derivatives of the Lagrangian. This is tedious to do manually, but let's look at how we would calculate the derivatives for one particular component of the Lagrangian, which I'll call L_comp.
The impact of this component of the Lagrangian on the torque at the second joint is tau_2comp. If we take the partial derivative of L_comp with respect to theta_2-dot, we get m_2 L_1 L_2 theta_1-dot cosine of theta_2, and if we take the time derivative of that, we get the expression you see here. Now we can subtract the partial derivative of L_comp with respect to theta_2 to get this expression. The last two terms cancel, so the final torque at joint 2 due to L_comp is m_2 L_1 L_2 theta_1-double-dot cosine theta_2. If we do these calculations for all the terms in the Lagrangian, we get these equations of motion. Even for a simple 2R robot, the equations are rather complicated. Notice that some terms are linear in the joint acceleration theta-double-dot, some terms do not depend on the joint acceleration but instead depend on a product of joint velocities, like theta_1-dot times theta_2-dot or theta_2-dot-squared, and some terms have no dependence on the joint velocities or accelerations. With this observation, we can write the vector equation of motion in this form: tau equals M of theta times theta-double-dot plus c of (theta, theta-dot) plus g of theta, where the matrix M and the vectors c and g are shown here. We call M the mass matrix. For a robot with n joints, this matrix is n-by-n, and for our 2R example it is 2-by-2. We call the vector c a velocity-product term, since it is composed of terms with a theta_i-dot-squared or a theta_i-dot times theta_j-dot in it. Finally, we call the vector g the gravity term, since it depends on gravity. We call this a gravity term under the assumption that the potential energy comes only from gravity, but if there were springs at the robot joints, those springs would also contribute to the potential energy and therefore to g of theta. Overall, this equation looks like f equals m-a plus a gravity force, except that the accelerations of the masses depend not only on the joint accelerations but also on products of the joint velocities. These velocity-product terms appear because the joint coordinates are not inertial coordinates. We will explore velocity-product terms in more detail in the next video. There is one more term we could add to the right-hand side, the Jacobian transpose times F_tip, where F_tip is the wrench that the end-effector applies to the environment. We learned about this term in Chapter 5. In the next video we'll take a closer look at velocity-product terms.
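To make the structure tau = M(theta) theta-double-dot + c(theta, theta-dot) + g(theta) concrete, here is a Python sketch that evaluates the standard closed-form expressions for this 2R example with point masses; the expressions follow the usual derivation and should be checked against the equations of motion given in the book.

```python
import numpy as np

def twoR_dynamics(theta, dtheta, m1, m2, L1, L2, g=9.81):
    """Mass matrix, velocity-product vector, and gravity vector for a planar 2R
    arm with point masses m1, m2 at the ends of links of length L1, L2."""
    t1, t2 = theta
    dt1, dt2 = dtheta
    c2, s2 = np.cos(t2), np.sin(t2)

    M = np.array([[m1*L1**2 + m2*(L1**2 + 2*L1*L2*c2 + L2**2), m2*(L1*L2*c2 + L2**2)],
                  [m2*(L1*L2*c2 + L2**2),                      m2*L2**2]])
    c = np.array([-m2*L1*L2*s2*(2*dt1*dt2 + dt2**2),
                   m2*L1*L2*s2*dt1**2])
    grav = np.array([(m1 + m2)*L1*g*np.cos(t1) + m2*g*L2*np.cos(t1 + t2),
                      m2*g*L2*np.cos(t1 + t2)])
    return M, c, grav

# Inverse dynamics: tau = M(theta) ddtheta + c(theta, dtheta) + g(theta)
theta, dtheta, ddtheta = np.array([0.1, 0.2]), np.array([0.5, -0.3]), np.array([1.0, 0.5])
M, c, grav = twoR_dynamics(theta, dtheta, 1.0, 1.0, 1.0, 1.0)
tau = M @ ddtheta + c + grav
# Forward dynamics, for simulation: ddtheta = M^{-1} (tau - c - g)
ddtheta_fd = np.linalg.solve(M, tau - c - grav)
```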
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_333_Exponential_Coordinates_of_RigidBody_Motion.txt
In the previous videos, we learned that any instantaneous velocity of a rigid body can be represented as a twist, defined by a speed theta-dot rotating about, or translating along, a screw axis S. In this video, we integrate the vector differential equation describing the motion of a frame twisting along a constant screw axis to find the final displacement of the frame. This animation shows a screw axis S and a frame at time zero. Let's say that this configuration is coincident with the space frame {s}, so its representation is the identity matrix. Now we let the frame twist about the screw axis at a rotational speed theta-dot = 1. The animation shows the configuration achieved at times t equals pi and t equals 2 pi. These configurations are represented as transformation matrices, but an alternative representation would use 6 exponential coordinates, similar to the 3 exponential coordinates for rotation that we saw earlier. In this case, we could represent the configuration at time pi as S times pi, meaning the configuration achieved after the frame has followed the screw axis S for time pi. Similarly, we could represent the configuration at time 2 pi as S times 2 pi. Let's look at some of the analogies between rotations and full rigid-body motions. For rotations, we have a unit rotation axis omega-hat. For rigid-body motions, we have a screw axis where either the angular component is a unit vector or the angular component is zero and the linear component is a unit vector. For rotations, the exponential coordinates are omega-hat theta, where theta is the angle of rotation about the axis omega-hat. For rigid-body motions, the exponential coordinates are S-theta. If the screw axis has any angular component, theta is the angle rotated about the screw axis. If the screw axis has zero rotation, then theta is the linear distance traveled along the axis. For rotations, the matrix representation of the exponential coordinates is the 3 by 3 skew symmetric representation of omega-hat times theta. The set of all such matrices is called little so(3). For rigid-body motions, the matrix representation of the exponential coordinates is a 4 by 4 matrix in little se(3), which we learned about in the last video. For rotations, the exponential maps matrices in little so(3) to rotation matrices, and the log maps rotation matrices to little so(3). For rigid-body motions, the exponential maps matrices in little se(3) to transformation matrices, and the log maps transformation matrices to little se(3). As with the case for rotations, the matrix exponential for rigid-body motions has closed-form solutions. There are two cases to consider: one where the screw axis is a pure translation with no rotation, and one where the screw axis has rotation. For the case of a purely translational screw axis, theta refers to the linear distance traveled, and the solution is particularly simple: the orientation is unchanged, hence the identity matrix in the top left submatrix, and the new position is just the unit linear velocity of the screw axis times the distance traveled. For the case of a screw axis with rotation, again a closed-form solution exists, but it is a bit more complicated. The algorithm for the matrix logarithm involves inverting these expressions to find the matrix representation of the exponential coordinates S times theta. 
Now, given a body frame {b} at the configuration T_sb relative to the space frame {s}, we would like to know the final configuration T_sb-prime of the body frame if it travels a distance theta along the screw S. We could represent S in either the {s} frame or the {b} frame. If we define it in the {b} frame, the final configuration T_sb-prime is T_sb times the matrix exponential, remembering that multiplying by a transformation on the right corresponds to a transformation expressed in the frame of the second subscript. If we define the screw axis in the {s} frame, we must premultiply T_sb by the matrix exponential, since multiplication on the left means that the transformation is expressed in the frame of the first subscript. Each single-degree-of-freedom joint of a robot, such as a revolute joint, a prismatic joint, or a helical joint, has a joint axis defined by a screw axis. The matrix exponential and log will be used extensively in the study of robot kinematics, starting in Chapter 4. The next and final video of Chapter 3 covers the representation of forces and torques in three-dimensional space.
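As a concrete illustration of the closed-form matrix exponential for a screw axis with rotation, here is a small Python sketch. It assumes the angular component of the screw axis is a unit vector, and the example screw axis at the bottom is an arbitrary zero-pitch axis chosen only for illustration.

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix [w] such that [w] v = w x v."""
    return np.array([[    0, -w[2],  w[1]],
                     [ w[2],     0, -w[0]],
                     [-w[1],  w[0],     0]])

def matrix_exp6(S, theta):
    """T = e^{[S] theta} for a screw axis S = (omega, v) with ||omega|| = 1."""
    omega, v = np.asarray(S[:3], float), np.asarray(S[3:], float)
    w = skew(omega)
    # Rodrigues' formula gives the rotation block
    R = np.eye(3) + np.sin(theta)*w + (1 - np.cos(theta))*(w @ w)
    # G(theta) v gives the translation block
    G = np.eye(3)*theta + (1 - np.cos(theta))*w + (theta - np.sin(theta))*(w @ w)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = G @ v
    return T

# Example: zero-pitch screw (pure rotation) about the z-axis through the point q = (0, 2, 0),
# so omega = (0, 0, 1) and v = -omega x q = (2, 0, 0); rotate by pi/2.
S = np.array([0, 0, 1, 2, 0, 0])
print(np.round(matrix_exp6(S, np.pi/2), 3))
```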
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_1121_Error_Response.txt
In the previous video, we saw that the controller compares the desired behavior to the actual behavior to produce its control signals. If the control objective is motion control, then the desired behavior is given by the desired motion, theta_d of t. This is also called the reference input. The actual motion is theta of t. We define the error to be theta_e equal to theta_d minus theta. The error dynamics are the equations that describe the evolution of theta_e of the controlled system. A good controller would create error dynamics that drive any initial error to zero, or nearly zero, as quickly as possible. At least the controller should be stable, meaning that initial errors do not grow. To measure the performance of a controller, let's focus on a robot with a single joint, since the ideas generalize easily. Let's define the unit step error response as the evolution of the error when the initial error is 1. As an example, imagine that the desired angle of your elbow joint is zero, and the actual angle matches it exactly. Then suddenly you request a constant joint angle of 1 radian. At that instant, which I'll call time zero, the error is 1 radian. If the controller is a good one, over time it should reduce the error. Here is a plot of a typical error response. The controller succeeds in decreasing the initial error, but never eliminates it completely. As time grows large, the error becomes constant, and we define e_ss to be the steady-state error. We can also see that the error response overshoots its steady-state value before settling. Finally, we can judge how fast a controller responds by measuring the first time that the error comes close to its final error, say within 2 percent of the total steady-state reduction of error, and stays there for all time. We call this the settling time. Visualizing the error response with my elbow, we would get a motion something like this. My arm comes to rest with a small steady-state error. A better response would have no steady-state error, no overshoot, and a faster settling time. In summary, the error response can be characterized by its steady-state response, which refers to the final error achieved, and its transient response, which consists of the overshoot and settling time. A good controller would have zero or small steady-state error, no overshoot or oscillation, and a short settling time. Usually we approximate the error dynamics of a controlled system by linear differential equations, so in the next video we'll take a closer look at this case.
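Given a sampled error response, the steady-state error, overshoot, and 2 percent settling time can be computed directly. The following Python sketch does this for a made-up decaying-oscillation error response; the response itself is an illustrative signal, not data from a real controller.

```python
import numpy as np

def error_response_metrics(t, e, band=0.02):
    """Steady-state error, overshoot, and settling time of a unit step error response e(t)."""
    e_ss = e[-1]                              # treat the final sample as the steady-state error
    total_drop = e[0] - e_ss                  # total steady-state reduction of error (e[0] = 1 here)
    overshoot = max(0.0, e_ss - np.min(e))    # how far the error passes its final value
    tol = band * abs(total_drop)              # 2 percent of the total reduction
    settled = np.abs(e - e_ss) <= tol
    idx = len(e) - 1                          # walk back to the start of the final settled run
    while idx > 0 and settled[idx - 1]:
        idx -= 1
    return e_ss, overshoot, t[idx]

# Illustrative response: decaying oscillation toward a nonzero steady-state error of 0.1
t = np.linspace(0.0, 10.0, 2001)
e = 0.1 + 0.9*np.exp(-1.5*t)*np.cos(4.0*t)
print(error_response_metrics(t, e))
```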
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_511_Space_Jacobian.txt
In the previous video, the robot's end-effector velocity v_tip was the time derivative of a minimum set of coordinates describing the end-effector's configuration. The Jacobian J maps the joint velocities to v_tip. For this 2R robot, the Jacobian has two columns, one for each joint, which we call J_1 and J_2. Each column is the contribution to v_tip when the speed at that joint is 1 and the speed at all other joints is zero. In this video, the end-effector velocity will be represented by the twist V_s represented in the space frame {s}. We call the corresponding Jacobian the space Jacobian J_s. It also has two columns, one for each joint. Since V_s is a 6-vector and there are 2 joints, the space Jacobian is a 6 by 2 matrix. For a general open-chain robot with n joints, the space Jacobian is 6 by n. Each column of the space Jacobian is the spatial twist when that joint's velocity is 1 and the velocity at all other joints is zero. To derive the form of the space Jacobian, let's use a specific example: a 5R arm, whose joint angles are given by theta_1 through theta_5. Then the space Jacobian is 6 by 5. Let's focus on J_s3, the third column of the space Jacobian, which corresponds to the spatial twist when the velocity at joint 3 is 1 and the velocity at all other joints is zero. If all joint angles are zero, then J_s3 is simply S3, the screw axis of joint 3 when the arm is at its zero configuration. We used this in Chapter 4 for the product of exponentials formula in the {s} frame. To find the column of the space Jacobian, though, we need the spatial twist corresponding to a unit velocity at joint 3 when the robot is at an arbitrary configuration, not just the zero configuration. So let's start moving the joints of the robot and see how that affects J_s3. First we rotate joint 5. Because joint 5 is not between joint 3 and the {s} frame, the relationship between joint 3 and the {s} frame is not affected by joint 5's angle. Therefore, J_s3 is unaffected by joint 5's value, and J_s3 is still equal to S3 at this configuration of the robot. Now we rotate joint 4. Again, J_s3 is unaffected by joint 4's value. Now we rotate joint 3. Again, the configuration of joint 3 relative to the {s} frame is unaffected by this motion, so J_s3 is unaffected by joint 3's value. Now we rotate joint 2 by theta_2. Now we see that the configuration of joint 3 has moved relative to the {s} frame, so J_s3 must change. But, we've drawn a new frame {s-prime} that has the same relationship to joint 3 that the frame {s} had to joint 3 before joint 2 moved. Therefore, the twist due to a unit velocity at joint 3 in the {s-prime} frame is just S3, the spatial screw axis when the robot was at its zero configuration. The configuration of {s-prime} in the {s} frame can be written e to the bracket S2 theta_2, the displacement achieved by the {s} frame by following the screw axis of joint 2 by an angle theta_2. Now we rotate joint 1 by theta_1. Again, joint 3 moves relative to the {s} frame, so J_s3 changes. We draw a new frame {s-double-prime} where the relationship between joint 3 and {s-double-prime} is the same as the relationship between joint 3 and {s} when the robot is at its zero configuration. The frame {s-double-prime} is obtained from the frame {s-prime} by rotating it about the joint 1 axis by an angle theta_1.
Because the joint 1 axis is represented by the spatial screw axis S_1, performing the transformation in the space frame corresponds to multiplying T-s-s-prime by e to the bracket S_1 theta_1 on the left, yielding this expression for the {s-double-prime} frame in the {s} frame. The reason we constructed this {s-double-prime} frame is that the screw axis of the third joint is the same in the {s-double-prime} frame as the screw axis S_3 of the third joint in the {s} frame when the arm is at its zero configuration. So, to find J_s3, we just need to convert S_3, which is now the screw axis expressed in the {s-double-prime} frame, to the screw axis expressed in the {s} frame. We use our standard rule for changing the reference frame of a twist, which gives us this final expression. The same reasoning applies for any joint, not just joint 3 of this 5R robot. Joint positions of joints between the joint and the {s} frame must be taken into account, while joint positions that do not affect the relationship between the joint and the {s} frame can be ignored. We can generalize to this definition of the space Jacobian J_s. The first column of the space Jacobian is just the screw axis S_1 when the robot is at its zero configuration. It does not depend on the joint positions, because no joint is between joint 1 and the {s} frame. Any other column i of the space Jacobian is given by the screw axis S_i premultiplied by the transformation that expresses the screw axis in the {s} frame for arbitrary joint positions. You can see that J_s2 depends only on the position of joint 1, J_s3 depends only on the positions of joints 1 and 2, etcetera. Notice that no differentiation is necessary to calculate the Jacobian. Also, the space Jacobian is independent of the choice of the end-effector {b} frame. In the next video we will do a similar derivation for the body Jacobian, where the end-effector twist is expressed in the end-effector frame {b}.
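This definition translates directly into a short computation. The Python sketch below builds the space Jacobian column by column, assuming the helper functions MatrixExp6, VecTose3, and Adjoint from the book's accompanying modern_robotics Python package; the planar 2R example at the bottom is an illustrative choice of screw axes, not a specific robot from the video.

```python
import numpy as np
import modern_robotics as mr   # assumed: the book's Python package

def space_jacobian(Slist, thetalist):
    """Column i of J_s is [Ad_{e^{[S1]th1} ... e^{[S_{i-1}]th_{i-1}}}] S_i."""
    Slist = np.array(Slist, dtype=float)     # 6 x n; columns are screw axes in {s} at the home configuration
    n = len(thetalist)
    Js = np.array(Slist, copy=True)          # column 1 is just S_1
    T = np.eye(4)                            # running product of exponentials
    for i in range(1, n):
        T = T @ mr.MatrixExp6(mr.VecTose3(Slist[:, i - 1] * thetalist[i - 1]))
        Js[:, i] = mr.Adjoint(T) @ Slist[:, i]
    return Js

# Planar 2R example: joint 1 at the origin, joint 2 at (1, 0, 0), both rotating about z.
S1 = np.array([0, 0, 1, 0,  0, 0])
S2 = np.array([0, 0, 1, 0, -1, 0])
print(np.round(space_jacobian(np.column_stack([S1, S2]), [np.pi/2, 0.0]), 3))
```

The package also provides a JacobianSpace function that performs this same calculation.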
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_104_Grid_Methods_for_Motion_Planning.txt
In this video we'll plan motions on a graph derived from a grid on the C-space. If the C-space has n dimensions, we can subdivide each dimension into k intervals, creating a total of k-to-the-n grid cells. Each cell is represented in the graph by a single node, representing the configuration at the center of the cell. For example, for a 2R robot, if we choose k equal to 32, then there are 32-squared or 1024 grid cells. Let's draw the start and goal configurations on this grid. If we add obstacles to the scene, the corresponding C-space obstacles can be represented conservatively by marking every C-space grid cell they touch as an obstacle. To turn this grid into a graph, we have to decide whether the centers of the grid cells are 4-connected, meaning that an edge exists between two free grid cells if they are directly north, south, east, or west of each other, or whether the centers of the grid cells are 8-connected, meaning that neighboring free grid cells along a diagonal are also connected. With this choice, the center of each free grid cell is considered to be a node of the graph, and edges are between the 4- or 8-connected free grid cells. To find a short path, we can use A-star to search the graph. We don't have to explicitly construct the entire graph in advance; we can construct it as we search, using the 4-connected or 8-connected rule. The optimistic cost-to-go is just the length of the shortest straight-line path through joint space, accounting for the wraparound at 0 and 2pi for each joint. If the cells are 4-connected, this is an optimal path. This optimal path is not unique; other paths with the same path length also exist. If the start and goal configurations are in free grid cells, but not at the centers, then the first path segment is from the start configuration to the center of its grid cell, and the last path segment is from the center of a grid cell to the goal configuration. If there are constraints on the robot's motion, such as for a car that can't move sideways, or if the robot is dynamic and the controls are forces, not velocities, then grid-based methods must be modified, as described in the book. But, the simple grid-based path planner I just described can be applied to any fully actuated kinematic system where the controls are velocities. Because of the discretization of the C-space, the grid-based planner is not complete, but it is resolution complete, meaning that it will find a path if one exists at the level of discretization chosen. The solution path is optimal for the underlying graph using A-star search. But, a major drawback is that this approach is not practical for high-dimensional spaces. The amount of memory needed to represent the grid, and the time to search the graph, grows quickly with the number of degrees of freedom. This problem can be mitigated, to some extent, by using multi-resolution grids. The key idea here is not to choose the discretization level k in advance, but to represent the free C-space coarsely in wide-open regions, and to use a finer resolution where the C-space is cluttered. This should keep the representation of the C-space relatively small, while still allowing representation of narrow passages of free space. As an example, imagine this dark gray box is a cell of the C-space, and the lighter box is a C-space obstacle. If we used a fixed resolution to represent the C-space, then this whole cell would be marked as in collision. If we subdivided the cell into 4 cells, then only one of them would be in collision.
We can subdivide again, then subdivide again, and in the end the original C-space cell is represented by a tree that looks like this. The 10 leaves of the tree are the final cells in our representation, and 2 of those cells, colored gray, are in collision. If we had used a fixed resolution grid, we would have needed 64 cells to achieve the same level of resolution. This kind of representation of a 2-dimensional C-space is called a quadtree, since cells subdivide into 4. In 3-dimensional space, cells subdivide into 8, and the representation is called an octree. In the next video, we will see a different way to generate a graph representing the C-space, based on random sampling.
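A quadtree decomposition like the one just described can be built with a short recursion. The following Python sketch assumes a hypothetical cell_status function, standing in for a collision checker, that labels an axis-aligned square cell as 'free', 'occupied', or 'mixed'; the single rectangular obstacle at the bottom is an illustrative example.

```python
def build_quadtree(cell, cell_status, depth, max_depth):
    """Recursively subdivide a 2D C-space cell (xmin, ymin, size) into a quadtree.

    Returns a list of leaf cells labeled 'free' or 'occupied'.  A cell that is
    still 'mixed' at the finest resolution is conservatively marked occupied."""
    status = cell_status(cell)
    if status != 'mixed' or depth == max_depth:
        return [(cell, 'occupied' if status == 'mixed' else status)]
    xmin, ymin, size = cell
    half = size / 2.0
    leaves = []
    for dx in (0.0, half):
        for dy in (0.0, half):
            leaves += build_quadtree((xmin + dx, ymin + dy, half),
                                     cell_status, depth + 1, max_depth)
    return leaves

# Illustrative collision checker: one rectangular C-space obstacle (a made-up example).
OBS = (0.6, 0.6, 0.9, 0.9)   # (xmin, ymin, xmax, ymax)

def cell_status(cell):
    xmin, ymin, size = cell
    xmax, ymax = xmin + size, ymin + size
    no_overlap = xmax <= OBS[0] or xmin >= OBS[2] or ymax <= OBS[1] or ymin >= OBS[3]
    contained = xmin >= OBS[0] and xmax <= OBS[2] and ymin >= OBS[1] and ymax <= OBS[3]
    return 'free' if no_overlap else ('occupied' if contained else 'mixed')

leaves = build_quadtree((0.0, 0.0, 1.0), cell_status, depth=0, max_depth=3)
```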
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_123_Transport_of_an_Assembly.txt
Here's a planar assembly of blocks on a table. We'd like to know if this assembly will stand or fall. To test if standing is a possible solution, for each block we can write a vector static balance equation, where F_ext is the external force acting on the block, in this case gravity, and the contact wrenches on each block must balance the gravitational force. The F_i are the wrenches corresponding to friction cone edges, and the k_i are nonnegative coefficients. For the left block, we can write the static balance equation as shown here. There are 8 friction cone edges acting on the left block: F_1 through F_4 from the table and F_5 through F_8 from the top block. The total contact wrench is in the positive span of these 8 wrenches. For the right block, we can write another vector static balance equation. The 8 friction cone edges acting on the right block are labeled F_9 through F_16. Finally, for the keystone block at the top of the arch, the 8 friction cone edges acting on it are minus F_5 through minus F_12. Since the keystone block must apply a wrench to the left block that is opposite the wrench that the left block applies to the keystone, the coefficients k_5 through k_8 are the same as those we used in our analysis of the left block. Similarly, the coefficients k_9 through k_12 are the same as those we used in our analysis of the right block. Counting the coefficients and the constraints, we have 16 nonnegative coefficients to satisfy 9 wrench-balance equations. If a linear constraint satisfaction solver finds a set of nonnegative coefficients satisfying the equations, then standing up is a feasible solution for the arch. Recall the problem of transporting a waiter's tray from the beginning of this chapter. We could formulate a dynamic version of the arch stability problem, and ask if the arch stays standing as its support surface moves. In this case, the equation for each rigid body is written as you see here. If we plug in the twist V and the acceleration V-dot of the tray and we can still find positive coefficients k_i satisfying the 9 equations, then the assembly can stay assembled during the motion. If any of the coefficients k_i has to become negative to satisfy the equations, then the assembly must be glued together to keep from collapsing. If the assembly were in 3 dimensions, nothing changes about the analysis except that we would approximate the quadratic friction cones as polyhedral cones. This concludes Chapter 12 on grasping and manipulation. In this chapter you learned about the kinematics of contact constraints, the forces that can be applied through a contact, and how to use this information to analyze different kinds of manipulation problems.
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_123_Manipulation_and_the_MeterStick_Trick.txt
This is the equation of motion for a single rigid body subject to frictional contacts. The right-hand side is the rigid-body dynamics we derived in Chapter 8. F_contact is the total wrench from all of the contacts, and F_ext is the wrench due to gravity or other forces. The procedure to analyze a rigid-body mechanics problem with friction is as follows. Given the state of the body and the manipulator, and the manipulator's motions or forces, first enumerate the potential contact modes that could hold at this instant, and for each contact mode, determine whether there is a contact wrench and body motion, consistent with Coulomb's law and the contact mode, satisfying the equation of motion. If the acceleration and velocity-product terms are negligible, we can replace the right-hand side with a zero, meaning that there is always force balance between the external wrench and the contact wrench. The assumption of force balance is called the quasistatic assumption. This procedure may sound strange. We don't simply specify the states and controls and solve some equations for the change of state. Instead, we have to test different contact modes for a possible solution. It is not hard to show that this procedure sometimes tells us that more than one contact mode is possible. Conversely, for some problems there may be no solution at all. This is one of the prices we pay to use the rigid-body and Coulomb friction assumptions. Fortunately, for many realistic problems there is a single consistent solution. Let's return to the meter-stick trick from the beginning of this chapter. We balance the meter stick on two fingers, with one finger close to the center of mass. If we move this finger slowly toward the other, the stick doesn't fall; instead, it slides to keep its center of mass between the fingers. Let's use the procedure I just described to prove this. I'll assume that motion is slow so the quasistatic assumption is satisfied. Here's an image of the stick balanced on two fingers, with the friction cones of the two fingers illustrated. To balance gravity, the fingers must create a contact force mg upward through the center of mass. To check if the fingers can create this force when they are stationary, we assign the contact label R to each finger and use moment-labeling to find a graphical representation of the composite wrench cone from the two contacts. Because the upward wrench mg creates negative moment about all points labeled minus and positive moment about all points labeled plus, it can be generated by the two fingers. Therefore, the stick can stay at rest on the stationary fingers. In general, each of the two contacts could be breaking, sliding left, sliding right, or rolling, for a total of 16 possible contact modes between the fingers and the stick. Some of these contact modes are not possible kinematically. For example, the contact mode RR is not possible; there is no way for the stick to remain stationary relative to both fingers as the fingers move toward each other. For the other contact modes, which are not ruled out solely because of kinematics, we have to undertake a quasistatic force analysis to see if they are possible solutions. As a first example, let's consider the contact mode where both contacts break, the contact mode called BB. Because the contacts are breaking, no forces can be applied, so the entire plane gets the moment label plus-minus. The required force mg cannot be generated by the two contacts, so we reach the obvious conclusion that the stick does not simply float away.
Now assume that there is no sliding at the left finger but the right finger slides. Then the left finger can apply any force in its friction cone, while the right finger can only apply forces on the left edge of its cone, as dictated by Coulomb's law. The wrenches that can be generated by the contacts are indicated by the moment labels. Since the upward force mg passes through the region labeled minus, the fingers cannot balance gravity, and this contact mode is not possible. If both fingers slide, as shown here, each contact force lies on the inner edge of its friction cone. Again, the moment labels show that the fingers cannot quasistatically balance gravity. Finally, if the left finger slides but the right finger does not, the moment labels show us that the fingers can generate a wrench to balance gravity. Therefore, this contact mode satisfies quasistatic balance and is a feasible solution. Technically, we should check that no other modes are possible and that this is a unique solution. So this explains why the center of mass always stays between the two fingers. Our analysis would show that once the center of mass is centered between the two fingers, then both contacts slide at equal speed until both fingers are directly under the center of mass. In practice, you can see slipping starting and stopping at each finger; this can be explained by a static friction coefficient that is larger than the kinetic friction coefficient.
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_94_TimeOptimal_Time_Scaling_Part_3_of_3.txt
In the previous video we learned a graphical interpretation of the time-optimal time scaling problem. Basically, we try to keep the speed s-dot as high as possible while remaining within the motion cone dictated by the robot's actuators. In some cases, the time-optimal solution is given by a bang-bang trajectory, where the robot maximally accelerates then maximally decelerates. In other cases, though, the velocity limit curve prevents a simple solution like this. In this final video of Chapter 9, we develop an algorithm to handle this case. The first step of the algorithm is initialization, which you can find in the book. In step 2, we integrate the minimum acceleration L backward from the end state until either we reach s equal to zero or the motion cone disappears at the velocity limit curve. In the figure shown here, we reach the velocity limit curve. In step 3, we integrate the maximum acceleration forward from the initial state until we intersect either the final segment or the velocity limit curve. If we intersect the final segment, then we're finished: we've found the optimal time scaling. In the figure, we intersect the velocity limit curve at (s_lim, s_lim-dot). We know we have to slow down at some point before this intersection, to keep from running into the velocity limit curve. Since time-optimal trajectories consist of only maximum acceleration U and minimum acceleration L, we need to find a point where we switch from U to L. One way to find the switch point is to decrease the velocity s-dot from the point where we hit the velocity limit curve, and to integrate forward the minimum acceleration L. Depending on how much we decrease s-dot, the forward integration will either hit the s-axis, as it does with our first two guesses; or it will intersect the velocity limit curve again, as with our third guess; or it will just touch the velocity limit curve tangentially. It is this tangential point we are looking for, and we'll call this point (s_tan, s_tan-dot). In step 5, we integrate the minimum acceleration L backward from the tangent point until it intersects the previous U segment. This is the switch point s_1, from maximum acceleration to minimum acceleration. States in the region shaded red would eventually collide with the velocity limit curve, so they have to be avoided. In step 6, we mark the tangent point as the switch point s_2, where we switch again from minimum acceleration L to maximum acceleration U. We then go back to step 3, where we integrate U forward again. This segment will either intersect the velocity limit curve again, in which case we repeat the process just described, or it will intersect the final L segment, and the algorithm is complete. In the figure shown, the algorithm completes with three switching points, and the time-optimal time scaling consists of maximum acceleration until s_1, followed by minimum acceleration until s_2, followed by maximum acceleration until s_3, followed by minimum acceleration to bring the robot to rest. This time scaling keeps the velocity as high as possible at all points on the path while assuring the trajectory is feasible for the actuators. A key step in this algorithm is step 4, finding the next tangent point on the velocity limit curve. Instead of using a binary search guess-and-check approach, a more computationally efficient approach is to numerically construct the velocity limit curve and search for the next point where the motion ray is tangent to the curve. 
Clearly, at the point of intersection the motion ray points into the region of inadmissible states. As we search along the velocity limit curve, we eventually find a point on the curve where the motion ray is tangent to the limit curve. We can now proceed with the algorithm as before. There are some other technical details, special cases, and improvements to the algorithm, some described in the book, but the description I've given covers the most important points. Because the time-optimal time scaling requires one or more actuators to operate at maximum capacity at all times, and because the dynamic model is never exactly known, this algorithm is rarely directly used in trajectory generation. Nonetheless, it provides a deep theoretical understanding of the maximum capability of a robot. So this concludes Chapter 9. You now understand the basics of how to generate trajectories for robots, and how the dynamic equations of Chapter 8 can be used to find time-optimal trajectories along specified paths. In Chapter 10 we will study the problem of planning motions among obstacles.
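To illustrate the phase-plane integrations in steps 2 and 3, here is a Python sketch that integrates the maximum acceleration U forward in the (s, s-dot) plane. The functions U and vel_limit are hypothetical placeholders for the robot-specific dynamics and velocity limit curve; integrating the minimum acceleration L backward is symmetric.

```python
import numpy as np

def integrate_forward(U, vel_limit, s0=0.0, sdot0=0.0, ds=1e-3):
    """Integrate s-double-dot = U(s, s-dot) forward in s from (s0, sdot0).

    Uses d(sdot)/ds = sddot / sdot. Stops at s = 1, at the velocity limit
    curve, or if sdot falls to zero. U(s, sdot) and vel_limit(s) are assumed
    to be supplied by the robot's dynamics; they are placeholders here."""
    s, sdot = s0, max(sdot0, 1e-6)   # small positive sdot avoids division by zero at rest
    path = [(s, sdot)]
    while s < 1.0:
        sddot = U(s, sdot)
        sdot += ds * sddot / sdot
        s += ds
        if sdot <= 0.0:              # the trajectory stalled before reaching s = 1
            break
        if sdot >= vel_limit(s):     # hit the velocity limit curve
            break
        path.append((s, sdot))
    return np.array(path)
```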
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_1024_Graph_Search.txt
It is common to represent the free C-space as a graph, where the nodes represent configurations and the edges represent free paths between the configurations. Once we have a graph, we need to search it to find a path between the start configuration and a goal configuration. The graph shown here has six nodes. Let's imagine that the edges connecting them are roads, and the roads may be straight or they may be winding. We'd like to find a path connecting the start and the goal that minimizes the cost, which is the total path length in this case. We draw the roads as straight edges, but we assign each edge a weight indicating the length, or cost, of the road. For example, going from node 1 to node 3 has a cost of 18. We now have a weighted undirected graph. In this video I'll focus on an undirected graph, but it's easy to generalize graph search to directed graphs. For this graph, the shortest path has a cost of 30 and goes from node 1 to node 4 to node 5 to node 6. To find the lowest-cost path, we will use A-star graph search, one of the most useful graph-search algorithms. A-star is used to find lowest-cost paths on a graph where the path cost is the sum of the positive costs of the individual edges. In addition to the graph, A-star search requires a function that calculates an optimistic cost-to-go from any node to a goal node. An estimate of the cost-to-go is optimistic if the actual cost-to-go can never be less than the optimistic estimate. The function that calculates the optimistic cost-to-go should satisfy two criteria: first, it should be fast to evaluate, and second, the estimated cost-to-go should be reasonably close to the actual cost-to-go. A function that always estimates zero cost-to-go satisfies the fast-to-evaluate criterion but fails the criterion of providing a good estimate. A function that exactly calculates the cost-to-go by searching the graph fails the fast-to-evaluate criterion. For our example, a good choice for the optimistic cost-to-go function is one that calculates the straight-line distance to the goal, as the bird flies. This is fast to evaluate and guaranteed to be optimistic. For node 1 of our example, the optimistic cost-to-go is 20, and for nodes 2 through 5 the optimistic cost-to-go is 10. With this as background, we can begin the search process. Let's create a table to keep track of our progress. The columns of the table correspond to the nodes 1 through 6. First, the "past cost" refers to the cost of the best known path to this node. Since node 1 is the start node, the past cost is zero; it doesn't cost us anything to get to node 1. The past cost for all the other nodes is infinity, since we don't know yet if there is a path to any of these nodes. Next, the optimistic cost-to-go is 20 for node 1, 10 for nodes 2 through 5, and zero for node 6, since it is the goal configuration. Next, we define the "estimated total cost" to be the estimated cost of the best solution path that passes through that node. The estimated total cost is the sum of the past cost plus the optimistic cost-to-go. The estimated total cost is 20 for node 1 and infinity for all the other nodes. Finally, the "parent node" of node i is the previous node on the best known path to node i. Node 1 does not have a parent node, and we don't know of any paths to any of the other nodes yet. Now we define two lists: OPEN, which is a list of nodes to explore from, and CLOSED, a list of nodes we have already explored from. We initialize the list OPEN with node 1, the start node. 
We also indicate its estimated total cost, which is 20. Now we begin the search by exploring from the first node in OPEN. We will explore all edges leading away from node 1. The first edge goes to node 3. Currently node 3 has no parent node, so we update that to indicate that node 3 can be reached from node 1. We also see that the cost to reach node 3 is 18, so we update the past cost to 18. This means that the new estimated total cost is 28. Now we add node 3 to the list OPEN, and we insert it in the list in sorted order according to its estimated total cost, 28. Next we take the edge from node 1 to node 4, and we update node 4 to indicate its parent is node 1, its past cost is 12, and its estimated total cost is 22. We now insert node 4 into OPEN, and it goes before node 3 because its estimated total cost is lower. Finally, we take the edge from node 1 to node 5, and we update node 5 to make its parent node 1, its past cost 30, and its estimated total cost 40. Then we insert node 5 into the sorted list OPEN. We are done exploring from node 1, so we move node 1 to the list CLOSED. This means we never have to revisit node 1. Now the first node on the OPEN list, the node with the lowest estimated total cost, is node 4, so we mark it for exploration. Node 4 connects to nodes 1, 5, and 6, but we don't have to consider node 1, since it's CLOSED. So let's take the edge to node 5. Currently, node 5 indicates that its parent is node 1 and its past cost is 30. But the cost of the path from node 4 is only 20, the sum of the past cost to node 4 plus the cost of the edge from node 4 to 5. Therefore, the new best path to node 5 goes through node 4 and has a past cost of 20. We update node 5's information so the past cost is 20, the estimated total cost is 30, and the parent node is 4. We also reflect that change in node 5's information in the OPEN list. Next we take the edge from node 4 to node 6. We update node 6's information to reflect the past cost and estimated cost of 32 and the parent node 4, and we add node 6 to OPEN. Even though node 6 is the goal configuration, our search is not done; we might still find a lower-cost path to node 6 in the future. We are now done exploring from node 4, so we move it to CLOSED. Next we explore from node 3. First we take the edge to node 2, we update node 2's information, and we insert node 2 in the sorted list OPEN. Now we take the edge to node 6, and we see that the cost of the path through node 3 is 18 plus 15 or 33, higher than the past cost of an already known path to node 6. Therefore we can ignore this edge. Now we are done with node 3, so we move it to CLOSED, and we mark node 5 for exploration. The only node it is connected to that is not in CLOSED is node 6, so let's explore that edge. The past cost to node 6 by a path passing through node 5 is the sum of node 5's past cost, which is 20, plus the cost of the edge from node 5 to node 6, which is 10. The new past cost is 30, which is less than 32, so we update node 6's information. Its past cost and estimated total cost is now 30, and its parent node is 5. We also update node 6's information in OPEN. We are done exploring from node 5, so we move it to CLOSED. Now the first node in OPEN is node 6, which is the goal configuration. Because of the additive nature of costs, this goal configuration cannot be reached by a lower cost path that we will find in the future. Therefore the search is done. 
We can reconstruct the optimal path by going back to node 6's parent, node 5; then to node 5's parent, node 4; then to node 4's parent, node 1. The path shown in green is the optimal path through the graph. To fully understand this algorithm, you may need to spend some time studying it in the book, or better yet, you should implement it. The algorithm can be summarized in this pseudocode. First, we initialize the algorithm, then enter a while loop, where we mark the first node in the OPEN list for exploration. If this node is in the goal region, then the algorithm has succeeded in finding an optimal solution, and the algorithm exits. If not, then the algorithm explores each neighbor in the graph that is not already CLOSED. For each neighbor node, the algorithm checks to see if it has found a new best path to that node, and if so, it updates the node's past cost, estimated total cost, parent, and position in the OPEN list. If the OPEN list ever becomes empty, then there is no solution to the motion planning problem. A variation on this algorithm is to always choose the optimistic cost-to-go to be zero. Then the past cost and the estimated total cost are equal. This algorithm is called Dijkstra's algorithm, which preceded A-star historically. Dijkstra's algorithm also finds optimal paths, but it's typically much slower than A-star, because it doesn't use a heuristic cost-to-go to help guide the search. So you should use a reasonable optimistic estimate if you can. If you make a mistake and your optimistic cost-to-go function actually returns an estimate greater than the actual cost-to-go, A-star may terminate with a solution that is not optimal.
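The pseudocode above translates into a compact program. Here is a Python sketch of A-star over a weighted undirected graph, run on the six-node example; the edge costs are the numbers quoted in the walkthrough, and the edge between nodes 2 and 3 is omitted because its cost is not stated.

```python
import heapq

def a_star(edges, heuristic, start, goal):
    """A-star search over a weighted undirected graph.

    edges: dict mapping node -> list of (neighbor, edge_cost)
    heuristic: dict mapping node -> optimistic cost-to-go
    Returns (optimal_cost, path), or (None, None) if no path exists."""
    past_cost = {start: 0.0}
    parent = {}
    OPEN = [(heuristic[start], start)]        # kept sorted by estimated total cost
    CLOSED = set()
    while OPEN:
        _, current = heapq.heappop(OPEN)
        if current in CLOSED:                 # stale entry; the node was already explored
            continue
        if current == goal:                   # goal at the front of OPEN: search is done
            path = [current]
            while current in parent:
                current = parent[current]
                path.append(current)
            return past_cost[goal], path[::-1]
        CLOSED.add(current)
        for nbr, cost in edges[current]:
            if nbr in CLOSED:
                continue
            tentative = past_cost[current] + cost
            if tentative < past_cost.get(nbr, float('inf')):   # found a better path to nbr
                past_cost[nbr] = tentative
                parent[nbr] = current
                heapq.heappush(OPEN, (tentative + heuristic[nbr], nbr))
    return None, None

# The six-node example, using the edge costs quoted in the walkthrough.
edges = {1: [(3, 18), (4, 12), (5, 30)],
         2: [],
         3: [(1, 18), (6, 15)],
         4: [(1, 12), (5, 8), (6, 20)],
         5: [(1, 30), (4, 8), (6, 10)],
         6: [(3, 15), (4, 20), (5, 10)]}
heuristic = {1: 20, 2: 10, 3: 10, 4: 10, 5: 10, 6: 0}
print(a_star(edges, heuristic, 1, 6))         # expected: (30.0, [1, 4, 5, 6])
```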
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_106_Virtual_Potential_Fields.txt
The motion planners we have seen so far are based on offline search. In this video we introduce a reactive real-time controller based on virtual potential fields defined on the robot's C-space. One potential field pulls the robot's configuration toward the goal configuration, while obstacle potential fields repel the robot from C-space obstacles. An attractive potential field on the C-space can be described by this quadratic in the difference between the actual configuration and the goal configuration, where the K matrix is positive definite. From physics, or from the Lagrangian approach to dynamics from Chapter 8, we know that the force due to a potential field is the negative of the gradient of the potential with respect to the configuration q. Taking the negative gradient of our artificial potential field, the force on the robot is proportional to the vector from the current configuration to the goal configuration, as you would get from a spring. We can plot the potential on a two-dimensional C-space as a quadratic bowl, where the z-coordinate is the potential. The left figure shows the 3D plot of the quadratic bowl, while the right plot shows the equipotential contour plot on the 2D C-space. The goal configuration is at the bottom of the bowl and the initial configuration is labeled q. The basic control law says that we apply a force to the robot that is equal to the force derived from the potential, the negative of its gradient, and the robot's motion evolves according to its dynamics. If we assume that the robot's mass matrix is the identity matrix, the robot moves as a ball rolling in the bowl, as shown in this simulation, where the robot begins with a nonzero initial velocity. Since the dynamics are essentially those of a mass pulled by a spring, the robot's total energy, potential plus kinetic, remains constant, and it never comes to rest at the goal configuration. To fix this, we can add damping to the control law, where B is a positive-definite matrix. This damping subtracts energy from the robot, allowing it to settle at the goal configuration. Finally, we could use an even simpler control law, where we directly control the robot's velocity to be equal to the force calculated from the potential. Under this control law, the robot moves directly to the goal. To allow for obstacles in the environment, we define a repulsive obstacle potential. This repulsive potential requires a distance function between the C-space obstacle B and the configuration q. This distance is zero when the robot is in contact with the obstacle and positive when the robot is not in contact with the obstacle. The potential is proportional to one over the distance squared, so the potential is large when the robot is near the obstacle. The force due to the obstacle is the negative gradient, and it points in the direction in which the distance between q and the C-space obstacle grows the fastest. The total force acting on the robot is the sum of the force attracting the robot to the goal and the forces repelling the robot from the obstacles. This 2-dimensional C-space has three obstacles and the goal configuration is at the center. The sum of the attractive and repulsive potentials, capped at a maximum potential value, is shown in this figure. The equipotential contour plot is shown in this figure, with the goal configuration indicated by a plus sign. The potential field has a unique global minimum at the yellow X.
Ideally this global minimum would be at the goal configuration, but the obstacles have pushed the minimum away from the bottom of the attractive quadratic bowl. The potential field also has saddle points, where the field is at a minimum in one direction and a maximum in another direction. There is also one local minimum, and this is a problem, as the local minimum attracts all points in its basin of attraction. If we simulate the robot's motion from two different initial configurations, q_1 and q_2, we see that one motion gets stuck at the local minimum while the other finds its way to the global minimum. Artificial potential fields are appropriate for real-time control as they can be evaluated relatively quickly, but local minima are a significant problem. A particular class of potential functions, called navigation functions, are guaranteed to have no local minima, but we only know how to compute navigation functions for a limited class of systems. We have not addressed the details of how to calculate the repulsive force due to an obstacle, since it may not be easy to explicitly calculate a distance between a configuration and a C-space obstacle. One option is to attach a finite set of control points to the robot, and for each of these control points, to calculate the closest point on each obstacle, and therefore the obstacle's linear repulsive force. Each repulsive force can be transformed to joint torques according to the Jacobian transpose corresponding to the location of the point on the robot.
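The simple velocity control law, q-dot equal to the force computed from the potential, is easy to sketch for a point robot in a 2D C-space. In the Python sketch below, the gains, the circular obstacle model, and the inverse-square repulsive potential are illustrative choices, not the exact functions used in the figures.

```python
import numpy as np

def potential_velocity(q, q_goal, obstacles, K=1.0, k_rep=0.5):
    """Velocity command q_dot = -grad P(q) for a point robot in a 2D C-space.

    obstacles: list of (center, radius) circles; the repulsive potential of each
    is k_rep / (2 d^2), where d is the distance to the obstacle boundary."""
    force = -K * (q - q_goal)                      # attractive spring force toward the goal
    for center, radius in obstacles:
        diff = q - np.asarray(center)
        d = np.linalg.norm(diff) - radius          # distance to the obstacle boundary
        if d > 1e-6:
            # negative gradient of k_rep/(2 d^2): a force pointing away from the obstacle
            force += (k_rep / d**3) * diff / np.linalg.norm(diff)
    return force

# Simple simulation of the velocity-controlled robot (illustrative start, goal, obstacle)
q, q_goal = np.array([-2.0, -2.0]), np.array([1.5, 1.0])
obstacles = [((0.0, 0.0), 0.5)]
dt = 0.01
for _ in range(2000):
    q = q + dt * potential_velocity(q, q_goal, obstacles)
```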
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_3_Introduction_to_RigidBody_Motions.txt
In Chapter 3, we learn representations of configurations, velocities, and forces that we'll use throughout the rest of the book. As discussed in the last chapter, we'll use implicit representations of configurations, considering the C-space as a surface embedded in a higher-dimensional space. In other words, our representation of a configuration will not use a minimum set of coordinates, and velocities will not be the time derivative of coordinates. This approach may be new to you if you haven't taken a course in three-dimensional kinematics before. Rigid-body configurations are represented using frames. A frame consists of an origin and orthogonal x, y, and z coordinate axes. All frames are right-handed, which means that the cross product of the x and y axes creates the z-axis. You can create a right-handed frame using your right hand: your index finger is the x-axis, your middle finger is the y-axis, and your thumb is the z-axis. If I want to represent the position and orientation of a body in space, I fix a frame to the body and fix a frame in space. The configuration of the body is given by the position of the origin of the body frame and the directions of the coordinate axes of the body frame, expressed in the space-frame coordinates. In this book, all frames are considered to be stationary. Even if the body is moving, when we talk about the body frame, we mean the stationary frame coincident with the frame attached to the body at a particular instant in time. Positive rotation about an axis is defined by the right-hand rule. If you align the thumb of your right hand with the axis of rotation, positive rotation is the direction that your fingers curl. With those preliminaries out of the way, in the next video we move on to representing the orientation of a rigid body.
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_115_Force_Control.txt
The goal of force control is to apply a desired end-effector wrench to the environment. Conceptually, you should imagine the end-effector is encased in a concrete wall, so that it can create a wrench in any direction. This is the equation of motion for a robot applying an end-effector wrench F_tip. Typically in force control tasks the robot is stationary or only moving very slowly, so we can eliminate the terms that depend on velocity or acceleration. Our force-and-torque-balance equation is then tau equals g plus J-transpose F_tip. If our model of the gravitational forces and torques is g-tilde and the desired wrench is F_d, this is a reasonable force control law. The robot applies the joint forces and torques needed to balance gravity plus the added forces and torques needed to generate the desired endpoint wrench. The only feedback needed to implement this control law is joint angle feedback, to calculate the Jacobian-transpose and the gravity model. This is force control without end-effector force feedback. To improve force control, we could equip the robot with a force-torque sensor at its end-effector, as you see in this photo. A six-axis force-torque sensor measures the end-effector wrench. With this wrench feedback, we can replace the feedforward wrench command F_d with the sum of the feedforward wrench, plus a proportional gain times the wrench error, where the wrench error is defined as F_d minus the actual wrench, plus an integral gain times the integral of the wrench error. This is PI force feedback control with a feedforward term and gravity compensation. The PI feedback controller theoretically allows the elimination of steady-state wrench error if there is a constant wrench disturbance, as might occur if there is modeling error in the gravity compensation. A derivative term is not typically used in a force controller for several reasons. First, in our simple rigid-body modeling of the robot, the lack of dynamics between the joint forces and torques and the wrench at the end-effector does not support the use of a derivative term. Second, force-torque sensors are typically rather noisy devices. Taking the derivative of a noisy measurement, as would be required to calculate a derivative term, only amplifies the noise. In practice, force-torque readings are typically low-pass filtered to try to average out some of the sensor noise. In the next video, we combine motion control and force control to get hybrid motion-force control.
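One servo cycle of the PI force controller with gravity compensation might look like the following Python sketch; gravity_model and jacobian are hypothetical stand-ins for the robot-specific model, and F_meas is the low-pass-filtered wrench reading from the force-torque sensor.

```python
import numpy as np

def pi_force_control_step(theta, F_d, F_meas, err_int, dt, Kfp, Kfi,
                          gravity_model, jacobian):
    """One cycle of: tau = g_tilde(theta) + J^T(theta) (F_d + Kfp F_e + Kfi * integral of F_e).

    gravity_model(theta) and jacobian(theta) are hypothetical functions supplied
    by the robot model; Kfp and Kfi are 6x6 gain matrices."""
    F_e = F_d - F_meas                      # wrench error
    err_int = err_int + F_e * dt            # running integral of the wrench error
    wrench_cmd = F_d + Kfp @ F_e + Kfi @ err_int
    tau = gravity_model(theta) + jacobian(theta).T @ wrench_cmd
    return tau, err_int
```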
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_6_Inverse_Kinematics_of_Open_Chains.txt
In Chapter 4, we studied the forward kinematics of open chains: given the joint configuration theta, find the configuration of the end-effector frame {b} relative to the space frame {s}. In this chapter, we study the inverse kinematics problem: given a desired end-effector configuration, find joint positions that achieve it. This is obviously an important problem, since we have to control the end-effector's configuration for it to purposefully interact with the world. Inverse kinematics is trickier than forward kinematics. Unlike the forward kinematics, which has a unique end-effector configuration for a given set of joint values, the inverse kinematics problem may have zero, one, or multiple solutions for the joint values theta given the desired end-effector configuration. We'll see an example of this in a moment. There are two approaches to solving the inverse kinematics problem: first, in some cases we can find analytic closed-form solutions to the nonlinear equations. These solutions typically take advantage of geometric insight into the problem and the particular structure of the robot. For arbitrary robot kinematics, however, analytic solutions may not exist, so a second approach is to use an iterative numerical method. This approach requires an initial guess at a solution, then iteratively drives the initial guess toward a solution. Unlike analytic methods, this approach requires an initial guess and will only find one solution, not all possible solutions, but it applies to robots with arbitrary kinematics. In this video we will analytically solve the inverse kinematics for a planar 2R robot. For this example, and indeed many of inverse kinematics problems in robotics, it is useful to define the two-argument arctangent function. The atan2 function takes the x and y coordinates of a point in the plane and returns the angle of a vector from the origin to the point relative to the x-axis. Another useful tool is the law of cosines. If a, b, and c are the lengths of the sides of a triangle, and capital C is the interior angle opposite side c, then the length of edge c is given by c-squared equals a-squared plus b-squared minus 2ab cosine capital C. For a planar 2R robot, the inverse kinematics problem is to find the joint angles theta_1 and theta_2 such that the tip of the robot is at the point (x,y). The workspace of the tip of the robot is bounded by circles: the inner circle has radius L_1 minus L_2, and the outer circle has radius L_1 plus L_2. If we request a tip position (x,y) outside the workspace, there are no inverse kinematics solutions. If we request a tip position on the boundary of the workspace, there is one solution: theta_2 is pi on the inner boundary and theta_2 is zero on the outer boundary. If we request a tip position in the interior of the workspace, then there are two solutions, as shown in the figure. For the solution shown on the right, here, gamma is the angle from the x-axis to the tip and alpha and beta are interior angles of the triangle formed by link_1, link_2, and the line from tip to joint_1. Gamma is determined from the atan2 function and alpha and beta are determined from the law of cosines. With these, the two solutions to the inverse kinematics for points in the interior of the workspace are shown here. The book discusses other examples of inverse kinematics, particularly for robots with 6 joints, and the atan2 function and law of cosines are useful in those examples, too. 
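Here is a minimal sketch of the analytic solution just described for the planar 2R arm, using atan2 and the law of cosines. Link lengths L1, L2 and the target (x, y) are the inputs; the function returns the two solutions, or raises an error if the target is outside the annular workspace. The sign conventions are one reasonable reading of the geometry in the lecture, so treat this as an illustrative implementation rather than the book's library code.

```python
import math

def planar_2r_ik(L1, L2, x, y):
    """Analytic inverse kinematics of a planar 2R arm.
    Returns [(theta1, theta2), (theta1, theta2)] for the two solutions."""
    r = math.hypot(x, y)                      # distance from joint 1 to the tip
    if not (abs(L1 - L2) <= r <= L1 + L2):
        raise ValueError("target outside the reachable workspace")
    gamma = math.atan2(y, x)                  # angle of the line from the origin to the tip
    # Law of cosines for the two interior angles of the link triangle.
    alpha = math.acos((L1**2 + r**2 - L2**2) / (2 * L1 * r))
    beta  = math.acos((L1**2 + L2**2 - r**2) / (2 * L1 * L2))
    sol_a = (gamma - alpha, math.pi - beta)   # elbow bent one way
    sol_b = (gamma + alpha, beta - math.pi)   # elbow bent the other way
    return [sol_a, sol_b]

# Example: L1 = L2 = 1 and target (1, 1) gives (0, pi/2) and (pi/2, -pi/2).
print(planar_2r_ik(1.0, 1.0, 1.0, 1.0))
```

On the workspace boundary the two solutions coincide, with theta2 equal to zero on the outer circle and pi on the inner circle, matching the discussion above.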
In the next video we study numerical inverse kinematics, which is useful for cases where no closed-form analytic solution exists.
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_101_Overview_of_Motion_Planning.txt
Chapter 10 focuses on robot motion planning, particularly in the case of obstacles in the environment. For example, in this video, a motion has been planned for the robot arm to move its end-effector from one frame to another, without hitting any obstacles in the environment. Since some versions of the path planning problem have been called the piano-mover's problem, here's an animation of a piano being maneuvered through a tight space. In this chapter, the configuration of a robot is described by a vector of n coordinates q, and the robot's C-space is denoted C, a subset of R^n. More generally, the C-space of the robot could be an arbitrary manifold, like SE(3) for a rigid body, but we will focus on configurations explicitly parametrized by a minimum set of coordinates. The C-space is the union of C_free, the set of configurations where the robot does not contact any obstacle, and C_obs, the set of configurations where the robot is in collision with an obstacle. The state of the robot is x, where x could simply be the configuration q, if the robot's control inputs are considered to be velocities, or it could be the configuration plus the velocity, if the control inputs are considered to be accelerations, or forces. The state space is capital X. The equations of motion are x-dot equals f of (x,u), where the control u is an element of the feasible control set capital U. The integral form of the equations of motion is shown here. With these definitions, a fairly general statement of the motion planning problem is: "Given an initial state x_start and a desired final state x_goal, find a time T and a set of controls such that the motion satisfies x-of-T equals x_goal and is collision free." We assume that we have perfect models of the robot and the environment. There are a number of possible variations on the motion-planning problem: we could plan a full trajectory, with timing information, or just a collision-free geometric path. The robot could have an actuator for every degree of freedom, like most robot arms, or it could have fewer actuators than degrees of freedom, like a car with only two control inputs, forward-backward speed and turning speed. The planner might have to make changes in real time as obstacles move in the environment, or it could be allowed to do its work in advance of the robot motion. We could ask for minimum-time, minimum-length, or otherwise minimum-cost motions, or we could be satisfied with any collision-free motion that reaches the goal. Finally, we might require motions that go exactly to a goal state or we could be satisfied with a final state anywhere in a goal region. Apart from the characteristics of the motion planning problem, we can define the following properties of a motion planner: The planner may be designed for multiple queries or a single query. A multiple-query planner is one that invests time in developing a good representation of C-space, so that future motion-planning problems in that space can be solved quickly. If the C-space changes often, however, a single-query planner attempts to find the solution to a single motion-planning problem as quickly as possible. We say that a planner is complete if it always finds a solution when one exists. A weaker version of this notion is resolution completeness. A planner is resolution complete if it always finds a solution when one exists at the level of discretization employed in the representation of the problem. There is also probabilistic completeness. 
A planner is probabilistically complete if the chance of it finding a solution, if one exists, goes to 100% as the planning time goes to infinity. Finally, the computational complexity of a planner refers to how much memory a planner will use, or how long the planner will take to execute, in either the average or the worst case, as a function of the description of the planning problem, such as the number of degrees of freedom of the robot or the number of vertices used to represent obstacles. Before describing specific planners, in the next few videos I will introduce foundational concepts in motion planning, such as C-space obstacles, graphs and trees, and graph search.
Modern_Robotics_All_Videos
Modern_Robotics_Chapter_85_Forward_Dynamics_of_Open_Chains.txt
In the last video we derived the recursive Newton-Euler inverse dynamics algorithm for open chains. In this video we address the forward dynamics, which solves for theta-double-dot given the joint forces and torques tau, the joint positions and velocities, and optionally an end-effector wrench F_tip. We can solve the forward dynamics using the inverse dynamics algorithm. First, we use the inverse dynamics to calculate the joint forces and torques if the joint accelerations theta-double-dot are zero. This gives us the Coriolis terms, the gravity terms, and the end-effector wrench terms of the joint forces and torques. Next, we use the inverse dynamics to solve for the mass matrix M of theta. To do this, we call the inverse dynamics algorithm n times, once for each joint, and each time we set gravity, the end-effector wrench, the joint velocities, and all joint accelerations except one, equal to zero. We set the i'th joint acceleration to be one. Then the joint torque vector tau found by the inverse dynamics algorithm is the same as the i'th column of the mass matrix M. By calling the inverse dynamics algorithm n times, we can construct the mass matrix M. Now referring back to the original problem statement, by calling the inverse dynamics algorithm n-plus-1 times, we have the mass matrix M of theta as well as c of theta, theta-dot, g of theta, and J-transpose times F_tip. We are given tau, so we just need to solve an equation of the form M times theta-double-dot equals a known vector. We can use any efficient algorithm to solve this for theta-double-dot. The forward dynamics can be numerically integrated to simulate the motion of a robot. At each timestep, you use the forward dynamics to calculate the joint accelerations, then use the accelerations and the current joint positions and velocities to calculate the joint positions and velocities at the next timestep. Let's use the forward dynamics to simulate the motion of this 6R arm as it falls in gravity with zero joint torques applied by the motors. The motion may look somewhat unrealistic, because we have not modeled friction in the joints. There are many approximate models of friction, and you can add your favorite model of friction torque, replacing the zero joint torques by joint torques that depend on the joint velocities. One advantage of having zero friction and zero joint torques, however, is that we know that no energy is dissipated. Therefore, the total energy of the robot, the kinetic energy plus the potential energy, must be conserved. This gives us one test of whether our simulation is working properly. Let's watch the simulation one more time. Notice that the arm swings to approximately the same maximum height each time, indicating that the potential energy at the end of each swing is approximately the same. In fact, if we plotted the sum of the kinetic and potential energy as a function of time, we would see it is nearly constant. This is one indication that our simulator is working properly. Because of numerical integration drift, the total energy will slowly change with time, but this effect can be mitigated by using smaller integration timesteps or more complex numerical integration procedures. Now that we can derive the forward and inverse dynamics of an open-chain robot, in the last few videos of Chapter 8 we address advanced concepts, such as dynamics in task space, dynamics of robots subject to constraints such as loop-closure constraints, and dynamics considering the details of geared motors driving the joints.
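Here is a minimal sketch of the procedure described above, assuming an inverse dynamics routine inverse_dynamics(theta, dtheta, ddtheta, g, Ftip) is available (for example, the recursive Newton-Euler algorithm from the previous video) that returns the joint torques. The function name and signature are assumptions for illustration only. The n+1 calls build the bias term and the mass matrix column by column, and a linear solve then gives theta-double-dot; a simple Euler step shows how the result can be integrated for simulation.

```python
import numpy as np

def forward_dynamics(theta, dtheta, tau, g, Ftip, inverse_dynamics):
    """Solve M(theta) ddtheta = tau - h(theta, dtheta) for the joint accelerations."""
    n = len(theta)
    # One call with zero joint accelerations gives the Coriolis, gravity, and
    # end-effector wrench contributions h = c(theta, dtheta) + g(theta) + J^T Ftip.
    h = inverse_dynamics(theta, dtheta, np.zeros(n), g, Ftip)
    # n calls, each with a unit acceleration at one joint and gravity, the wrench,
    # and the joint velocities set to zero, give the columns of the mass matrix M.
    M = np.zeros((n, n))
    for i in range(n):
        e_i = np.zeros(n)
        e_i[i] = 1.0
        M[:, i] = inverse_dynamics(theta, np.zeros(n), e_i,
                                   np.zeros(3), np.zeros(6))
    return np.linalg.solve(M, tau - h)

def euler_step(theta, dtheta, ddtheta, dt):
    """Simplest first-order integration step for simulating the robot."""
    return theta + dtheta * dt, dtheta + ddtheta * dt
```

As noted in the lecture, smaller timesteps or a higher-order integrator reduce the slow drift in total energy that a simulation like this will show.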
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
6_Atoms_II.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Good afternoon. So last week, we started our discussion of atoms. So these are, of course, the key players in a course of atomic physics. And we will reveal the structure of atoms going from the larger energies to the smaller energies. That means we start with electronic energies mainly determined by Coulomb energy. And then we go to finer levels which are fine structure, hyperfine structure, Lamb shift, and all that. So we have started last week to discuss the Schrodinger equation, discuss hydrogenic energy levels, and I pointed out some important results on typical length scales and the scaling of the wave function. We will meet that later today. But before I continue with atomic structure, I want to start today to discuss the units. The atomic units we are using to describe the atom. So the nat-- and for every problem in physics, you have what one may call natural units. And for us these are also called the atomic units. So what are atomic units? Well, atomic units are the units for length, for energy, for velocity, for electric field. And all of these units, you should be able to construct them out of fundamental constants. The fundamental constants which appear in the Schrodinger equation for the electron within the atom are the charge of the electron, the mass of the electron, and h-bar. Now, is there any other fundamental constant we should include here to construct our natural units? c, good question. What about c? The speed of light. Should the speed of light be part of our system of atomic units? Let's not go there. [LAUGHTER] Because if you set it to one, you have made a choice. You have constrained your system. And you're almost obscuring the fact whether it should appear or not. Well, the strong message I want to give you is at the level of electronic structure, at the level of the Schrodinger equation, it should not be there. Because we are talking at this point about solutions of the Schrodinger equation, the hydrogenic levels. And there is no c, no speed of light in the Schrodinger equation. So c is not part of the fundamental constants we have to consider now. It will later come in the fine-structure constant, but this is a different story. So we have three units, e, m, and h-bar. And you can just play the game of combinatorics and see can you find a length which consists of those three units. And, well, it's h-bar squared over m e squared. This is how you get the unit of length. And this length is called the Bohr radius. And indeed, this is sort of the RMS size of the electron in the 1s state. You can play the game again and ask, can we construct an energy? Well, you find that if you take e to the four, if you take the mass and finally divide by h-bar square, then you have a unit of energy. And this unit of energy turns out to be one Hartree or 27.2 electron volts or two Rydbergs, twice the binding energy of the electron in the 1s state. And we had some discussion last week that the factor of two reflects the virial theorem. This is actually that one Hartree is the Coulomb energy of the electron in the 1s state, which is the binding energy. But then half of it is kinetic energy. And, therefore, the total energy of the 1s electron is half the Coulomb energy. And it's that.
OK, so, so far no c. The energy, the energy levels, the wave function. If there is no c, no speed of light in the Schrodinger equation, there is no c in the solution of the Schrodinger equation. And if you set it 1, sure, it wouldn't-- I mean, if your relativity equation said c equals 1, you obscure the fact. But here, it's definitely not there. But now we can also see, well, there are other important energies. One energy, and now I bring in the c just because I want to compare two energies which include the speed of light, the rest energy of the electron. Or a very fundamental unit of length is h-bar over mc. Which is the Compton wavelengths. So that's lambda Compton, the Compton wavelengths of the electron. And, well, if we try to figure out what is the ratio of the atomic unit of lengths. And the Compton radius, we have to multiply with a dimensionless unit which is hc over e squared. Or if I take the reverse, e squared over hc, h-bar c. And similarly, here the dimensionless quantity to multiply is that. So what do we find now here is we find that what we get is a quantity which I want to call alpha, the fine-structure constant. And what I find here is the same constant, alpha to the minus 1, times the Compton wavelengths. Let me discuss the fine-structure constant in a second, but there's still two more atomic units we want to discuss. There is the velocity and there is the electric field. I can simply get the velocity by saying, well, the velocity enters the kinetic energy in mv square. And if I said mv square, I want to skip all factors of unity. So that's not one half, it's just mv square. And if I said mv square equal to one Hartree, then I find that the atomic unit of velocity is e squared over h-bar. But this turns out to be alpha times c. So again, what we find is alpha, the fine-structure constant. Alpha is, of course, dimensionless. It is 1/137. So therefore, we see that if the velocity-- and this is actually the orbital velocity of the 1s electron. If this is alpha times c that confirms that the electron non-relativistic. We have solved the non-relativistic Schrodinger equation for it. And consistently, we find that the velocity of the electron is 1/137 of the speed of light. It's actually physics trivia. If somebody asks you how fast is the electron in the hydrogen atom, about 1% of the speed of light. Of course, if you had solved the non-relativistic Schrodinger equation. And you solve it for, let's say, a naked uranium nucleus where z, the charge of the nucleus, is 92, then you find that this fine-structure constant times z is on the order of unity. You would find that the electron moves at the speed of light. And then you realize, gosh, I've solved the wrong equation. Because a non-relativistic Schrodinger equation when the solution is that something moves at the speed of light, I'd better start with a different equation. But here, we find we are consistent. An electron in the hydrogen atom for low nuclear charges of a few hydrogen, helium, and so on is non-relativistic. So let's just finish that. The electric field is the electric field felt by the electron which orbits the nucleus on the 1s shell. And this is 5.1 times 10 to the 9 volts per centimeter. So everything I've constructed here out of the three fundamental constants, e, m, and h are typical for the 1s electron for the ground state of hydrogen. 
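As a quick numerical check of these atomic units, here is a minimal sketch using scipy.constants (SI units, so the Gaussian-units e squared of the lecture becomes e squared over 4 pi epsilon_0). It reproduces the Bohr radius, the Hartree, the atomic unit of velocity alpha times c, and the electric field of about 5.1 times 10 to the 9 volts per centimeter quoted above.

```python
from scipy import constants as c

e2 = c.e**2 / (4 * c.pi * c.epsilon_0)        # Gaussian-units e^2 written in SI
a0      = c.hbar**2 / (c.m_e * e2)            # Bohr radius, ~5.29e-11 m
hartree = c.m_e * e2**2 / c.hbar**2           # Hartree, ~27.2 eV
v_au    = e2 / c.hbar                         # atomic unit of velocity = alpha * c
E_au    = e2 / (c.e * a0**2)                  # electric field at one Bohr radius

print(a0, "m")
print(hartree / c.e, "eV")                    # about 27.2
print(v_au / c.c)                             # about 1/137, the fine-structure constant
print(E_au / 100, "V/cm")                     # about 5.1e9 V/cm
```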
We got the typical lengths, the Bohr radius, the typical energy, the Hartree, the typical velocity, which doesn't have a name, and the electric field experienced by this electron. So let's now talk about alpha. So alpha is dimensionless. It is, you know, if a constant has dimension like h-bar, like c, actually the value of it reflects our system of metrology. If you define the second in a different way, h-bar will change. c will change. So a lot of constants are not fundamental constants, being fundamental to the physics at hand. They are more kind of translating our metrological system into the equations we use to describe our system. But if something has no dimension, it is not related to a unit like the kilogram or the second, it has really fundamental importance. So, therefore, alpha is the fundamental constant in atomic physics. And if you have a fundamental constant, ultimately, there should be a theory of everything which should ultimately predict the value of alpha. Which, ultimately, will be predicted by a complete theory. So alpha, the fine-structure constant, is smaller than 1. It's 1/137. And the fact that alpha is smaller than 1 is often phrased in these words. That since alpha is much smaller than 1 that this implies the electromagnetic interactions are weak. OK, I want to explain that. I mean, I've heard this many, many times. But what does it mean interactions are weak? So let me give you sort of my, in 90 seconds, sort of my spiel on it. Why does alpha mean that the electromagnetic interactions are weak? Well, we have to compare the electromagnetic interaction to something else. And the result will be for this situation I create, the electromagnetic interactions are weak. Electromagnetic interactions of the Coulomb field gets, of course, stronger, and stronger, and stronger the closer you move two charges together. So what does it mean that the Coulomb interaction is weak? Weaker compared to strong interactions or other interactions. Well, let me try to justify it as follows. We cannot go to arbitrarily small distances. We can do it in classical physics, but not in quantum physics. So if you go to very small distances, if you localize particles very tightly, they have a lot of momentum uncertainty. The momentum uncertainty means energy uncertainty. And the energy uncertainty may mean that we can create electron and positron pairs. So the moment we have an energy uncertainty, by our definition of bringing two particles close together, and this energy uncertainty is larger than the rest mass, we have to be very careful. We can no longer use a single particle description, or our concept of single particle physics breaks down if you have the particles prepared with an energy uncertainty, which would spontaneously create more particles. So therefore, let me postulate that our picture how we think about those interactions, arranging two charges and writing down what the Coulomb energy is. That if energy uncertainties become on the order of the rest energy, then the concept of single particles breaks down. Of course, that's not the end of physics. You need now a field theory for particles where particles are just excitations in your field. But here in atomic physics, you want to describe an electron bound to a nucleus. And we want to use those concepts. So let's just sort of say, what does it mean when delta e is mc squared? Well, that means the momentum uncertainty is on the order of mc. And with this momentum uncertainty, I can localize particles to within h-bar over mc.
And this just turns out to be the Compton radius of the electron. So, therefore, I should be careful when I talk about the Coulomb energy between particles if I would go closer than the Compton radius. So, therefore, let me compare now the Coulomb interaction at the Compton radius to something else. So this Coulomb energy is e squared over the radius. The Compton radius. And this turns out to be e squared, mc over h-bar. So unless I want to get into quantum field theory of particles, before I need a different description, the strongest Coulomb interaction I can create by putting two particles at the Compton radius is that. And I can now compare this Coulomb interaction, the Coulomb energy, to the rest energy, mc squared. And, well, if I take what I had above, e squared mc over h-bar, I divide by mc squared. I find that e square, that the result, the ratio, is e squared over h-bar c. And this is just alpha. So in other words, what I've shown to you, if you try to bring two charges as close as possible before spontaneous pair production sets in, then you find that the Coulomb energy is not the dominant energy in the system. The dominant energy is the rest energy, is the mass of the electron itself. And the ratio of those two energies at this point is, of course, completely independent of what metrological system you use for energy, lengths and such. It's really something which says something fundamental about the nature of interaction. And what I just presented to you leads to the statement that the Coulomb interaction, the electromagnetic interaction, is weak. Because the fine-structure constant is much smaller than unity. Of course, if you use a nucleus of uranium, naked uranium, and people have ion traps where they create uranium 92 plus and then they add an electron, you're really studying very interesting physics. You're studying the physics of an electron for which the effective fine-structure constant is on the order of unity. And that's why people are very interested in it. And that's one area of current research. Any questions? OK, so that's our little excursion about units. Let's talk now about some general properties of one electron atoms with cores. A lot of research in our field is done with alkali atoms. Alkali atoms are not the hydrogen atom. But they have one outer electron. So they are hydrogen-like. And now we want to sort of figure out what is the main difference between one outer electron in rubidium and sodium, and the electron in hydrogen. Well, if we have an electron. And this electron orbits around. In the alkali atom, there is an ionic core which has a charge of z plus. But then in this sort of compact core, there are also z minus 1 electrons. So the electron, the outer electron pretty much feels the electric field of a single charge, but there is the effect of the core. So what I want to discuss with you is now what is the leading correction to the properties of this atom? What is the leading correction to the hydrogen-like wave function due to the fact that we have an ionic core and not a proton. So for hydrogen, we would have the Rydberg formula, that for principal quantum number n, the energy is-- oh, just one second. Yes. OK, the hydrogenic energy would be z squared times the Rydberg constant divided by n squared. I'm confused about the factor of z squared, which I clearly have in my notes. I have to read up something about it. Let me take it out here. You know, we have a charge of z.
But what I'm talking about is one electron which is very far away from the nucleus. And this one electron feels, in effect, only a single charge. And so, ultimately, you would expect that the Rydberg spectrum, if you go to higher and higher n, would actually converge to the Rydberg spectrum of hydrogen which would not have the factor of z squared. I hope I'm not overlooking something, but I'm just correcting my notes on the fly. So this would simply be what an electron would do in the Coulomb field of a single charge. And the question is, what is the leading correction to this formula? And I want to ask it as a clicker question. So that's the effect, that we have an ionic core. Does it make a constant offset to the binding energy? Or does it make the correction which, of course, is 1 over n squared. This would just scale the overall spectrum. It would actually lead to a modified rescaled Rydberg constant. Or is the correction higher order in n? Or finally, is it a correction which changes the effective principal quantum number n? Maybe you know the answer or maybe you want to try to guess the answer. So these are the four choices. Of course delta has different units. I just used the same symbol for the correction term. But depending whether it appears it has units of energy or if it appears in the denominator with n, it is dimensionless. AUDIENCE: Aren't these questions also C and D [INAUDIBLE]? PROFESSOR: Yes, this is the first thing I wanted to tell you, that answers C and D are the same. Because if we assume delta is small and you do a Taylor expansion of the denominator, you just get this. So c and d are actually equivalent. And so if I add up 20 people voted for c and d, which is equivalent. So this is equivalent by Taylor expansion. And, indeed, this is also-- this time there's two correct answers. So let me quickly derive it. The derivation is short. And it adds some insight. We want to do perturbation theory. And in perturbation theory, we simply take the wave function of the simplified Hamiltonian and ask what is the energy correction due to the fact that we have a finite core size? Since the finite core is near r equals 0 at the origin, we only need the scaling of the wave function close to the origin. And this is r to the l. And we have 1 over n to the power 3/2, as we discussed last week. So our Hamiltonian is now the Hamiltonian of the hydrogen atom plus a perturbation term. And the perturbation term is the deviation of the potential experienced by the outer electron, the deviation from a pure Coulomb potential. So the energy correction, en, is the expectation value of the hydrogenic wave function with the perturbation Hamiltonian. And the only thing we have to know about the perturbation Hamiltonian is that this Hamiltonian is localized around the origin. And then we immediately find because of the scaling of the wave function with n that this is 1/n cubed. And it's proportional. And by factoring out the Rydberg constant, I can parameterize this matrix element with a quantity, delta l, which is dimensionless. So that means that the binding energy is the hydrogenic binding energy plus this correction. And then to leading order in delta l, it's identical to this result. And this parameter, delta l, which is characterizing a whole Rydberg series for all n values for given l, this is called the quantum defect.
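Here is a minimal sketch of the quantum-defect formula just derived, E_n = -Ry / (n - delta_l) squared, applied to a whole Rydberg series. The numerical quantum defect used below (roughly 1.37 for the sodium s series) is an illustrative value from the literature, not something stated in the lecture.

```python
RYDBERG_EV = 13.6057   # Rydberg energy in eV

def rydberg_series(delta_l, n_values):
    """Binding energies E_n = -Ry / (n - delta_l)^2 for a given quantum defect."""
    return {n: -RYDBERG_EV / (n - delta_l)**2 for n in n_values}

# Illustrative value: the s-series quantum defect of sodium is roughly 1.37,
# so the 3s level comes out near -5.1 eV instead of the hydrogenic -1.5 eV.
print(rydberg_series(delta_l=1.37, n_values=[3, 4, 5, 6]))
print(rydberg_series(delta_l=0.0,  n_values=[3, 4, 5, 6]))   # hydrogen, for comparison
```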
So people realized early on in the early days of quantum mechanics before they understood it that the spectrum of many atoms followed a formula which was not 1/n squared as in hydrogen. It was 1 over n minus delta l, quantity squared. They didn't understand it. But this is, of course, now the derivation. There are many other derivations which you may enjoy reading about it in our atomic physics wiki. There's a derivation using the semi-classical approach and using the very nice physical picture that an electron, when it comes close to the core, experiences a scattering phase shift. And this scattering phase shift is directly related to the quantum defect. Or you can make a model Hamiltonian which is exactly solvable where the perturbation Hamiltonian is not completely localized at the core. But it's proportional to 1/r squared. And then you can exactly solve it because you have already a term which is 1/r squared in your Schrodinger equation. Which is the centrifugal term. So therefore, the perturbation is only redefining the centrifugal term. It's redefining what l is. And eventually, you can solve it directly. Questions? OK. Then let me spend five to 10 minutes on spectroscopic notation. So the next five or 10 minutes, how we describe the configuration of an atom, well, I don't particularly like to teach it. Because it's more nomenclature about old-fashioned symbols. On the other hand, if you're working with atoms, you have to learn the language how to describe atoms. And I also know that in an appreciable fraction of oral exams there will be one person on the committee who says, what is your favorite atom? And what is the configuration of your atom? So it's something if you're an atomic physicist you're supposed to know. So the spectroscopic notation, the term designation focuses on the fact that if you have an isolated atom, we have angular momentum conservation. And so we have at least two quantum numbers. Which are sometimes also two good quantum numbers. We have some approximate quantum numbers where we have additional terms which break certain symmetries. But an isolated atom lives in isotropic space. The total angular momentum of this atom is conserved. It's absolutely conserved. It's an absolutely good quantum number. And the good quantum numbers are the total angular momentum, j, and its projection, m_j. So in the language of atomic physics, we call j a level. It's different from states. So one level has now 2j plus 1 sub-levels or states. So usually when we talk about a level, we assume the level has degeneracies because there is the Mj quantum number. So j, you're talking about electronic structure. So j can have, when we have an isolated atom, can have contributions from several electrons. It can have contributions and these electrons can contribute through spin, s, and orbital angular momentum, l. In many situations, especially with alkali atoms, the inner core is a completely filled shell. There is no s, no l from the inner electrons which contribute. And all the contribution to the angular momentum comes from the outer electron. Especially for the lighter atoms. The non-relativistic atoms. The different electrons undergo LS coupling. In other words, if you have multiple electrons, their orbital angular momentum couple up to the total orbital angular momentum, l. And all the spins couple up to the total spin, s. So therefore, before we introduce spin orbit coupling, l is a good quantum number. s is a good quantum number. And then they couple to a good quantum number, j.
Of course, once l and s couple to j, orbital and spin angular momentum precess around the total angular momentum. Anyway, I just want to say when I talk about j, what are possible ingredients? So let's assume we have an atom which has total angular momentum, j. And which is the sum of orbital angular momentum and spin angular momentum. And then in this case, we use a term designation. A level is designated by a term which is written as l, the value of orbital angular momentum. The spin multiplicity, 2s plus 1 is an upper left index. And a lower right index is j. And of course if l is 0, we use the letter s, p, d. This is sort of the historic letter designation for l equals 0, 1 and 2. So in other words, if you have an atom where the total angular momentum is composed of orbital angular momentum and spin, you can always write this symbol. And this symbol is the term designation which characterizes the state, the ground state or an excited state of your atom. If you have the hydrogenic atom, often you precede the term by the principal quantum number, n. So let me give you an example. If you have the sodium atom, the outer electron has n equals 3. It has 0 orbital angular momentum. It has spin 1/2. And 2s plus 1 is 2. And the total angular momentum is 1/2. If you go to the first excited state, you're still in n equals 3. But you have promoted the electron from an s state to p state. So, therefore, the orbital angular momentum is now 1 designated by p. The spin is still spin 1/2. But now orbital angular momentum of 1 and spin 1/2 can form a total angular momentum which can either be 1/2 or 3/2. So if you're asked what is the state you prepare your atom in, you would give it a symbol 3 doublet p 1/2. And I've explained to you what it means. There is one addition. And sometimes you want to not just mention what is the principle quantum number of the outer electrons. Sometimes you want to specify the whole configuration. So this would mean you want to sort of build up the electron shell and say that I have two electrons in 1s, two electrons in 2s, one electron in 2p, and so on. So you use, I think this would now be beryllium atom? 1s. So 1s is hydrogen, 1s2 is helium. Then we go to lithium. AUDIENCE: [INAUDIBLE] is boron. PROFESSOR: It's boron, no? OK, so this would be boron. So what we use is here we use the products. We use products of symbols n, l, m. So to come back to the example of sodium, so sodium is filled up. As in the first shells. So it's 2p6. And then we have one electron, the outer electron in 3s. However, let me point out that this way to specify the configuration strongly depends on a hydrogenic model. It assumes that the electrons are non-interacting. And is, therefore, an approximation. In contrast, the term designation with a total angular momentum is always exact. Well, at least the total angular momentum is an exact quantum number. Whereas the configuration is based on the independent electron approximation in hydrogenic orbits. And usually when you have a real atom and you calculate with high precision what the electronic wave function is, you find actually that total many-body wave function is a superposition of many such configurations. But as long as one configuration is dominant, this configuration designation makes sense. Any questions? OK, so we come back to the hydrogen atom when we discuss smaller features of the energy levels. Fine structure, hyperfine structure, and so on. 
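Here is a minimal sketch that assembles the term designation described above, the letter for L with the spin multiplicity 2S+1 as an upper left index and J as a lower right index, preceded by the principal quantum number. Values are formatted as plain text (for example "3 2P3/2"), and the function is purely illustrative.

```python
from fractions import Fraction

L_LETTERS = "SPDFGHIK"   # letter codes for L = 0, 1, 2, 3, ...

def term_symbol(n, S, L, J):
    """Return the spectroscopic term, e.g. term_symbol(3, 0.5, 1, 1.5) -> '3 2P3/2'."""
    multiplicity = int(round(2 * S + 1))
    j_str = str(Fraction(J).limit_denominator(2))   # prints 1/2, 3/2, 2, ...
    return f"{n} {multiplicity}{L_LETTERS[L]}{j_str}"

print(term_symbol(3, 0.5, 0, 0.5))   # sodium ground state: 3 2S1/2
print(term_symbol(3, 0.5, 1, 0.5))   # first excited states: 3 2P1/2
print(term_symbol(3, 0.5, 1, 1.5))   #                       3 2P3/2
```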
But in our discussion of electronic energies, that's all I want to say about one electron atoms. So let's now proceed and discuss the helium atom. So we want to understand now what are the new effects when we have not only one electron, but two electrons. And don't worry. We are not proceeding to three, four, five electrons. I think to go from one to two, we actually capture the most important step. Namely, the interaction between the two electrons and what the results of that is. For the helium atom, there are some excellent treatments in standard textbooks of quantum mechanics. One is the famous quantum mechanics text by Cohen-Tannoudji et al. But also the text of Gasiorowicz. The reason why I have added the helium atom to the curriculum is because it is a simple system where we can discuss singlet and triplet states. And the singlet and triplet configuration is important for populations in optical lattices, for quantum magnetism and such. Actually, you can say if you have two electrons and they align in a triplet state or a singlet state, one you can say is ferromagnetic. The other one is paired and antiferromagnetic. It's the simplest example where we can discuss magnetism. So that's my motivation why I want you to know something about the helium atom. So, therefore, let's now discuss energy levels of helium. And let's just start with the most basic model. The helium atom has two charges. So if you regard it as a hydrogen problem and we put two electrons into the 1s state, we would expect that based on the hydrogenic model that the binding energy of that is, per electron, is the Rydberg energy as in hydrogen. But now we have to scale it with z squared, the nuclear charge. And this gives us a factor of 4. So we would expect that per electron, the binding energy in the more simple hydrogenic model is 54 electron volt. So that would mean that the binding energy of the ground state is minus 108 electron volt. However, the experimental result is that it's only 79 electron volt. So we find that there is a big discrepancy of 29 electron volt. Which is really huge. So what is responsible for this big discrepancy? Well, what we have neglected, of course, is the interaction between the two electrons. So we can fix that in the simplest way by keeping the wave function from the hydrogenic model. But now calculating the electronic energies, the electron-electron energy, by using the electron-electron interaction as a perturbation operator. So we still use as the wave function for the ground state, electron 1 in the 1s state. So 1-0-0 is a designation for n, l, m for the hydrogenic quantum numbers. And we assume that the ground state is simply the product of two electrons in the 1s state. So if I calculate for this perturbation operator the expectation value with this ground state, we find that there is an energy correction which is 34 electron volt. So this removes most of the discrepancy. You can improve on it with a variational wave function if you use hydrogenic eigenfunctions as your trial wave function. But you are now calculating those hydrogenic wave functions not for nuclear charge 2, but with a nuclear charge z star, which you keep as a variational parameter. You find that you find even better wave functions. And you can remove 2/3 of this remaining discrepancy of five electron volts. This variational wave function is left to you as a homework assignment. So z equals 2 is replaced by a variational parameter. Anyway, that's all I want to tell you about the ground state of helium.
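The numbers quoted above can be reproduced with a few lines. In atomic units (hartrees), the expectation value of the helium Hamiltonian with both electrons in a 1s orbital of effective charge zeta is the standard textbook expression E(zeta) = zeta squared minus 2 Z zeta plus (5/8) zeta; this is the same perturbative and variational calculation the lecture refers to, written out numerically as a sketch.

```python
HARTREE_EV = 27.211

def helium_energy(zeta, Z=2):
    """<H> in hartrees for two electrons in 1s orbitals of effective charge zeta."""
    return zeta**2 - 2 * Z * zeta + (5.0 / 8.0) * zeta

Z = 2
print("hydrogenic (no repulsion):", -Z**2 * HARTREE_EV, "eV")              # about -108.8
print("first-order perturbation :", helium_energy(Z) * HARTREE_EV, "eV")   # about -74.8
zeta_opt = Z - 5.0 / 16.0                                                   # variational optimum
print("variational (zeta=27/16) :", helium_energy(zeta_opt) * HARTREE_EV, "eV")  # about -77.5
print("experiment               :", -79.0, "eV")
```

The first-order correction relative to the hydrogenic estimate is about 34 eV, and the variational result removes roughly two thirds of the remaining few-eV discrepancy, consistent with the numbers in the lecture.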
It's pretty much finding a wave function which correctly captures the Coulomb energies. The interaction between the two electrons. But what is much more interesting, and this is what I want to focus on for the rest of this lecture, is what happens in the excited state. Before I do that, let me just tell you what we just discussed. So we have this hydrogenic estimate and eventually the Coulomb energy raises the energy level to what we have just discussed. But, you know, the ground state has no degeneracy. And so all we can talk about is quantitative shifts. However, when we go to the excited state, we will find the degeneracies. And degeneracies are much more interesting because something which was degenerate can split. You have two different terms. So suddenly there is richer physics. So, therefore, we want to discuss now the excited state. So starting again with the hydrogenic model. In hydrogen, the 2s and 2p state are degenerate so we have two configurations contributing to the same energy. 1s 2s and 1s 2p. The binding energy in the hydrogenic model is a quarter of a Rydberg. Rydberg over n squared, and n is 2. We have to scale it by z square. And we find 13.6 electron volt. But now what happens is that we have to introduce the Coulomb energy between the electrons. And if you do that, it shifts up the levels in different ways. So this is 1s, 2s. This is 1s, 2p. So why is this different? Well, you can say the following. You have two electrons. And you have the helium nucleus. And you first put in the 1s electron. And now the second electron when it is in a p state, in a 2p state, it's further out. And it pretty much experiences out there the charge of the helium nucleus shielded by the 1s electron. And, therefore, it sees in effect a smaller nuclear charge. Whereas, the 2s electron penetrates deeper. Gets closer to the nucleus and will still realize that the nucleus has a charge of 2. And not a shielded charge. So, therefore, you would expect that the shielding effect due to the innermost electron is more important, has a bigger effect for the 2p electron than for the 2s electron. So let me just write that down. So the 2p electron sees a shielded nucleus. In other words, what it experiences as an inner core is more the Coulomb potential of helium plus. And not so much of helium 2 plus. This is due to the 1s electron. And, therefore, the 2p electron has a smaller binding energy. It's actually comparable to the binding energy of the 2p state in hydrogen. Which is on the order of 4 electron volt. And this effect is smaller for the 2s state. So let's go back to the energy diagram. So we have now the situation which I just described. We have two degenerate configurations in the hydrogenic model. And when we add in Coulomb energy between the electrons, there is quite a big splitting of several electron volts. But now, each level undergoes further splitting. And this is what I want to discuss. So we are still sorting out just the preliminaries. What I really want to discuss with you is the singlet and triplet thing. But now we are there where we can do it. So if you have a configuration with 1s, 2s, there are two possibilities for the total angular momentum. Two electrons in an s state, there is no orbital angular momentum. But there are two spins, 1/2. And they can add up to 1 or can add up to 0. So, therefore, we will have two different terms. One is singlet s0. And one is triplet s1. And this splitting is 0.8 electron volt. And this is what we want to discuss in the following.
For completeness, but the physics is similar, let me mention that the 1s, 2p state also gives rise to two terms. We have now the total orbital angular momentum, p. Orbital angular momentum of 1. The spin angular momentum is 0 or 1. So we have singlet and triplet. And the total angular momentum is 1 in this case. Or in this case, 2, 1, or 0. And the splitting in this situation is also on the order of a fraction of an electron volt. So what we want to understand now is why do we observe a splitting between those two levels which seems to depend on the spin? How can the spin cause a splitting? Because the spin so far has not appeared in our Hamiltonian. We really have a Hamiltonian which has only the Coulomb energy. And the spin is not part of it. So we don't have a magnetic field to which the magnetic moment of the spin would couple. And also we have not yet introduced spin orbit coupling. But if this is on your mind, take it off your mind. Spin orbit coupling is a much, much smaller effect. Energies on the order of 1 electron volt, you just cannot get from spin orbit coupling. Spin orbit coupling is smaller than electronic energies, as I will explain to you on Friday, is smaller by the fine structure constant. So the typical scale for spin orbit coupling is maybe 10 or 100 milli-electron volts. It's much smaller. So, therefore, we want to understand now why do we have a spin dependent energy. Although we haven't coupled at this point the spin to any field. OK, so we are focusing now on the splitting. So we have the 1s, 2s configuration. We get two terms, as I just discussed. One is singlet and one is triplet. And still using the hydrogenic model of non-interacting electrons, we want to write down the wave function. So the wave function of the two electrons is we have one electron in the 1s state. We have one electron in the 2s state. But now since electrons are fermions, we have to correctly symmetrize it. So whether we want the symmetric or antisymmetric combination, we exchange the two electrons. So now we have r1 and r2 in reverse order. Of course, the total wave function for two fermions has to be antisymmetric. But the total wave function is the product of the spatial wave function which I just wrote down times the spin wave function. The spin wave function can be antisymmetric and symmetric. And the antisymmetric spin wave function has to combine with the symmetric spatial wave function, and vice versa, to make sure that the total wave function is antisymmetric. And the correct description for fermions. The designation here, symmetric and antisymmetric for the total wave function reflects the spatial part. The total wave function, of course, including the spin wave function is always antisymmetric. OK, so we have two wave functions. One has a symmetric spatial wave function. The other one, an antisymmetric. The symmetric spatial wave function has an antisymmetric spin wave function. And that means we have s equals 0. That's up, down, minus down, up is antisymmetric. Whereas the antisymmetric spatial wave function goes together with the symmetric spin wave function s equals 1. So this is the situation which gives rise to that triplet s1 term. And this here is the singlet s0 term. OK. As long as we have non-interacting electrons, the two wave functions are degenerate. But now we want to bring in the Coulomb energy between the two electrons which we had already discussed before.
And if you calculate the energy using this as a perturbation operator, well, remember the wave function had two parts. It was 100, r1, 200, r2. And the part where r1 and r2 were flipped. And now if we have the wave function, the perturbation operator, we get a total of 4 terms. 2 times 2. We have to sort of multiply it out. And we will then have the sort of diagonal parts. And we have the parts which are off diagonal. And for the off diagonal parts, it matters whether we had the plus or minus sign. You know, if you have plus, plus and minus, minus, you give a positive contribution. But if you connect the wave function before and after the operator, plus with minus, we get minus signs. So in other words, we have one contribution where it doesn't matter whether we have the symmetric or antisymmetric spatial wave function. But then we have another term where it matters whether we have the symmetric and antisymmetric wave function. That's where the plus or minus sign from the symmetrized wave function appears. So we have two contributions now to the energy correction. One is independent of the spin wave function whether we have the symmetric or antisymmetric configuration. The other one is not. The first term is called the Coulomb energy. The second term is called the exchange energy. So what we find is-- we have to trace back the sign, but you find that the triplet state symmetric in spin. And antisymmetric in the spatial wave function. Has the lower energy, is more strongly bound because the antisymmetric spatial wave function reduces the repulsive interaction between the two electrons. So in other words, we do not have any spin term in the Hamiltonian. It's just that whether the spin wave function is symmetric or antisymmetric, it requires the spatial wave function to be the opposite. And now when we calculate the Coulomb energy for the symmetric or antisymmetric spatial wave function, we find a big difference. And the big difference is the exchange energy. So it is a spin dependent term for the total energy. But what is behind this energy is simply the Coulomb energy. So the spin through the symmetry of the wave function leads to a difference in the Coulomb energy. And actually what I'm telling you is the explanation why we have magnetism at room temperature. It was Heisenberg's idea when he realized for the first time what can cause ferromagnetism. It was pretty much the model of the helium atom expanded to many, many electrons in a lattice. So the result is Curie temperatures of up to a thousand degrees. We have magnetism below 1,000 degrees. This energy scale is an electronic energy, is a Coulomb energy scale and not an energy scale where spin interactions come into play. If it were not for the exchange energy, we would not have magnetic materials above 1 Kelvin. So this is what you see here in the helium atom. How spin leads to an energy splitting which is an electronic Coulombic energy splitting. OK, let's make it maybe even more obvious. The above equation can be rewritten. So this energy splitting can be written as a constant, alpha, plus a constant beta, times the product of s1 and s2. Well, what happens is-- let me just give you as a sidebar, the product of s1 and s2 can be written as one half times minus s1 square, minus s2 square, plus s square. s1 square is one half times 3/2, it's 3/4. s2 squared is 3/4. And s square is either 0 or 2 depending whether you're in the singlet or triplet state.
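The sidebar can be made concrete with a short calculation: s1 dot s2 equals one half of S(S+1) minus s1(s1+1) minus s2(s2+1), which is -3/4 in the singlet and +1/4 in the triplet, and any singlet/triplet pair of energies can then be matched by E = alpha plus beta times s1 dot s2. The 0.8 eV splitting of the 1s2s configuration quoted earlier is used as the input; this is a sketch of the parametrization, not of the exchange integral itself.

```python
import numpy as np

def s1_dot_s2(S, s1=0.5, s2=0.5):
    """Eigenvalue of s1.s2 for total spin S: (S(S+1) - s1(s1+1) - s2(s2+1)) / 2."""
    return 0.5 * (S * (S + 1) - s1 * (s1 + 1) - s2 * (s2 + 1))

print(s1_dot_s2(0), s1_dot_s2(1))     # -0.75 and +0.25

# Fit E = alpha + beta * (s1.s2) to a singlet/triplet pair, e.g. the 1s2s
# levels of helium with the triplet 0.8 eV below the singlet.
E_singlet, E_triplet = 0.0, -0.8      # energies in eV, singlet taken as the reference
A = np.array([[1.0, s1_dot_s2(0)],
              [1.0, s1_dot_s2(1)]])
alpha, beta = np.linalg.solve(A, [E_singlet, E_triplet])
print(alpha, beta)                    # beta < 0 here: an effective ferromagnetic coupling
```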
In other words, I just want to, with this sidebar, which you have seen many, many times, just remind you that the product of s1 and s2 has only two values. One for the singlet state, one for the triplet state. So, therefore, if I have a singlet level and a triplet level, I can always parametrize it like this. And I can make it more obvious by showing that this formula on the right hand side has two values. One for s equals 0. And one for s equals 1. And this can be rewritten as alpha. You find a more explicit calculation in the [? video, ?] but it's also just one more line of algebra. You can write it in the following way. So, therefore, the alpha and beta parameter are just a way of, you know, you have two energy levels. And with two constants, you can always describe two energy levels. Here, I've done it with alpha and beta. But before, I did it with the Coulomb energy. And the exchange energy which we obtained in this equation when we perturbatively calculated the integral. So by writing it as s1 dot s2, I even suggest that the two spins interact like a dipole-dipole interaction. But they don't. It comes from the Coulomb interaction, but it is equivalent to a gigantic dipole-dipole interaction. So, therefore, the conclusion of this is that what I have derived for you for the helium atom, it looks like a ferromagnetic spin-spin interaction. And, well, it looks like it, it is actually an effective ferromagnetic spin-spin interaction. However, the coupling is purely electrostatic here and not magnetic. Questions? OK, yes? AUDIENCE: So since we're still using the hydrogenic wave function as a basis, how closely does this get to the actual measured values? PROFESSOR: The question is, how close do we get with hydrogenic wave functions to the actual measured energy? I don't know. I don't know the exact numbers for the excited state. I assume that it's similar to the ground state. You saw that for the ground state, we had a big discrepancy. Most of that was closed by using hydrogenic wave functions. And just calculating the perturbative terms. So you pretty much get qualitatively or semi-quantitatively, you get the picture out of it. And unless you're really interested in the absolute values, you can stop there. But one way to go further and reduce the discrepancy by 75% is to use this variational wave function where you use hydrogenic wave function, but you use z, the nuclear charge, as a variational parameter. And I mean, this is, I mean, it's amazing. I mean, you have a two electron atom and you use a hydrogenic wave function with one free parameter. And you get binding energies which are on the order of 70 electron volt accurate to within better than 3%, 4%, 5%. But by adding other terms or using a little bit more fancy wave functions, I'm sure you can get further and further. OK, so the last thing I wanted to discuss is the new feature of two electrons is that we have singlet and triplet levels. So we have a ladder of states which are singlet states. And then, and this is what we just discussed, let me just make dashed lines. Because of this ferromagnetic spin-spin interaction, we have triplet states which have lower energy. So these are n equals 2 triplet states. n equals 2 triplet states. j equals 1. j equals 0, 1, 2. So of course, there are transitions between those levels. And from a p state, you can have transitions down to an s state. The question I have is what about possible transitions between triplet and singlet?
So what I want to ask you now with a clicker question is what kind? So those transitions here are transitions between singlet and triplet. And the technical term for transitions between singlet and triplet are intercombination lines. So the question I have for you is what fields or couplings drive singlet triplet transitions? So I want you to think about the model we have discussed so far. All we have is Coulomb energy. Coulomb energy between the nucleus and the electrons. And between the electrons. That's it. We do not put any other terms into the Hamiltonian. And now we have obtained those wave functions. We have obtained those energy splittings. And now we want to ask are there transitions possible? So one possibility is that we can drive the transition with optical fields. Let's say our dipole operator. Which many of you have encountered in life. We have already discussed rotating magnetic fields in the first part of the course. Is it possible to use both magnetic fields and optical fields? Or the last answer is none of the above. It's not possible with any of those fields to create any kind of transition between singlet and triplet. OK, all right. We have to spend the first 10 minutes of the class on Friday to discuss that. But the answer is none. There is no way how you can get a transition between singlet and triplet using the approximations for the description of the helium atom we have done so far. Since I don't want to end with such a cliffhanger, let me just say, we have actually a selection rule which says, we are not changing the total spin. And you would say, hey, come on. Can't I just take a magnetic field and flip one of the spins, go from a triplet to a singlet state? Well, the answer is no. And let me just give it to you qualitatively and formalize it on Friday. First, as far as transverse magnetic fields, rotating magnetic fields, are concerned. Transverse B fields. Remember, transverse B fields create some rotation of the spin. But both spins precess equally. So you can never even classically, you cannot change the angle between the two spins. When the two spins are antiparallel, they stay antiparallel. When they are parallel, they stay parallel. Or to say it differently, the transverse B field means we have a coupling, sx, sy. And sx and sy can be written as ladder operators, s plus and s minus. And as you know, s plus and s minus only change the magnetic quantum number. But they do not change the value of the total s. In other words, if you have a spin which is s equals 1 triplet state, you can change the angle, how the spin s equals 1 points. But it still will be spin 1 where both electrons are aligned parallel. There is no magnetic field which can selectively talk to one spin and rotate spin 1 with respect to spin 2. And coming to the other questions about optical fields with a dipole operator, the answer is also a resounding no. Because the dipole operator of the electromagnetic field, the dipole operator acts only on the spatial wave function. Not on the spin part. So a laser beam through the dipole operator can never ever flip the spin. It only acts on the spatial wave function. So anyway, I wanted to just give you the answer why you cannot go from singlet to triplet state with sort of our standard operators, with the electric dipole operator or with the rotating field. And I hope it was worthwhile to explain in detail why each of them cannot do it.
But on Friday, I will explain to you that there is a general symmetry behind it, so that even more fancy combinations will not be able to do that.
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
15_Atomlight_Interactions_IV.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. Well, the result which we obtained on Wednesday for spontaneous emission, for the Einstein A coefficient, can be regarded as an accomplishment, as a highlight of the course. We've worked hard to talk about atoms and electromagnetic fields. And ultimately, to deal with spontaneous emission, it was not enough to put a semi-classical light-atom interaction-- dipole Hamiltonian, Rabi oscillations and such-- into the picture. We really needed a quantized version of the electromagnetic field. And this is the result for an atom which is excited and interacts with all of the empty modes of the vacuum. We summed up the probability that a photon is emitted into any of those modes, and by doing, kind of, all of the averaging with the density of states, and over all possible polarizations and directions of emission, we obtained the famous result for the Einstein A coefficient, which is also the natural linewidth of the atomic excited state. Do you have any questions about the derivation or what we did last week? Then I think I will just continue and interpret the result. So we have the result for the Einstein A coefficient. And well, the question is, how big is it? Well, it has a number of constants, so let's discuss it now in atomic units. Well, if we assume the frequency or the energy is on the order of a Rydberg-- that's sort of the measure for an electronic excitation in the atom-- and we assume the dipole matrix element is one, that means one Bohr radius. Since we have pretty much set everything to one and expressed everything in atomic units, what is the speed of light? Remember, the velocity of the electron in the hydrogen atom is alpha times smaller than the speed of light, and that velocity is one atomic unit. So therefore, the speed of light in atomic units is one over alpha. And that means that if you look at the formula, where there is the speed of light to the power of 3 in the denominator, in atomic units the Einstein A coefficient is alpha to the 3, which is 3 times 10 to the minus 7. So that means that the ratio of this spontaneous emission rate-- which is also the inverse lifetime and, therefore, the natural linewidth of the excited state-- relative to the transition frequency, so the damping of the harmonic oscillator or the two-level system relative to the level spacing of the oscillator, is small. It's actually alpha cubed. So if you take this 3 times 10 to the minus 7 and multiply it with the atomic unit of frequency, which is 2 Rydbergs, we obtain something on the order of 10 to the 9, and it's a rate of 10 to the 9 per second. And that means that the lifetime of a typical atomic level is on the order of 1 nanosecond. Well, often it's 10 to 100 nanoseconds, because many transition frequencies are smaller by quite a factor than the atomic unit of frequency. Remember, the Rydberg frequency would be deep in the UV, but a lot of atoms have transitions in the visible. I highlighted already when I derived it that the spontaneous emission has this famous omega cubed dependence, and that this is actually important to understand why low-lying levels-- excited hyperfine levels-- do not radiate. So let me just, kind of, formalize it.
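Before formalizing the omega cubed argument, here is a quick numerical sanity check of the nanosecond scale just quoted. This is a minimal sketch with assumed values (a 589 nm transition and a dipole matrix element of one atomic unit; neither number is from the lecture), evaluating the standard expression A = omega^3 d^2 / (3 pi epsilon_0 hbar c^3):

import math

# SI constants
hbar = 1.054571817e-34
eps0 = 8.8541878128e-12
c = 2.99792458e8
e = 1.602176634e-19
a0 = 5.29177210903e-11

# assumed, illustrative transition: 589 nm, dipole matrix element d = e * a0 (one atomic unit)
omega = 2 * math.pi * c / 589e-9
d = e * a0

A = omega**3 * d**2 / (3 * math.pi * eps0 * hbar * c**3)   # Einstein A coefficient, in s^-1
print(A)        # ~ 1e7 per second
print(1 / A)    # ~ 1e-7 s, i.e. of order 100 ns, the nanosecond scale discussed above

Real alkali resonance lines have somewhat larger matrix elements, which brings the lifetime down toward tens of nanoseconds.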
Let me now estimate the radiative lifetime for a transition which is not, as I just assumed, in the UV or in the visible. What is the radiative lifetime to emit a microwave photon at a few gigahertz? Well, the microwave frequency of a gigahertz, 10 to the 9, is five orders of magnitude smaller than the frequency, 10 to the 14, of an optical transition. So, because of the omega cubed scaling, the lifetime is 10 to the 15 times longer. And if you have, typically, one or ten nanoseconds for an electronic transition, that means that the spontaneous lifetime for a microwave transition is seven months. If in addition we factor in that hyperfine transitions have an operator which is of the Bohr magneton, magnetic type, not an electric dipole-- and we discussed, when we talked about multipole transitions, that the Bohr magneton is alpha times smaller than a typical electric dipole moment-- then a magnetic dipole matrix element is alpha times weaker than an electric dipole matrix element. And that means now, if you multiply the months, which we obtained from the frequency scaling, again by alpha squared for the weakness of the magnetic dipole, we find that atomic hyperfine levels have a lifetime which is on the order of 1,000 years. And this is why it's very safe to neglect those transitions in the laboratory and assume that all hyperfine states in the ground state manifold pretty much don't decay and are long lived. Questions? OK, so with that we have discussed spontaneous emission. Let's go through a few clicker questions to discuss the subject and verify your understanding. So the first question is: an E2 transition, which is a quadrupole transition-- can you drive it with a plane wave, or does it need a laser beam which has an intensity gradient, such as a focused laser beam? Yes or no? OK. Well, the answer is yes, you can just use a plane wave laser beam. If a quadrupole transition required a gradient, it would really require a gradient over the size of the atom, and that would be extremely hard to achieve. Fortunately, this is not the case, because we actually assumed in the derivation that we had a plane wave, e to the ikr, and then did the Taylor expansion. And it was the next term in the Taylor expansion of the plane wave which gave rise to the matrix element for the quadrupole transition. So a plane wave laser beam is sufficient to drive higher multipole transitions. Next question. Can spontaneous emission be described as a stimulated emission process driven by the zero point field? By the zero point field we mean the following: each mode of the electromagnetic field is a harmonic oscillator, a harmonic oscillator has a ground state, and in the ground state you have zero point motion. So there is an electric field even when we have the vacuum state. And the question is, can spontaneous emission be described as simply being stimulated emission, but now due to the zero point fluctuations of the electromagnetic field? OK. The answer is, it depends. If you just want to make a qualitative, hand-waving argument, then I would say you are correct: you can say that the electromagnetic field of the vacuum stimulates a transition. But when I said described, I meant: can you get it quantitatively correct? And there the answer is actually no, because the energy of the electromagnetic field is n plus 1/2 h bar omega, whereas the emission rate goes as n plus 1. So you have half a photon's worth of extra energy.
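Coming back to the lifetime scaling at the start of this passage, here is the same order-of-magnitude arithmetic in a few lines. The round numbers (1 ns electronic lifetime, 10^14 Hz optical frequency, 1 GHz hyperfine frequency) are assumed for illustration:

alpha = 1 / 137.0
tau_optical = 1e-9      # assumed electronic-transition lifetime, ~1 ns
f_optical = 1e14        # optical transition frequency, in Hz
f_microwave = 1e9       # hyperfine transition frequency, ~1 GHz

# rate ~ omega^3 * |matrix element|^2; a magnetic dipole is alpha times smaller in the matrix element
tau_hyperfine = tau_optical * (f_optical / f_microwave)**3 / alpha**2

print(tau_hyperfine)                      # ~ 2e10 seconds
print(tau_hyperfine / 3.15e7, "years")    # ~ 600 years, i.e. of order 10^3 years as quoted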
But this spontaneous emission is sort of like the spontaneous emission is the rate, which would be stimulated by an extra energy of h bar omega. So in other words, you would get the answer wrong by a factor of two. I think decoding deeper in the electrodynamics description of spontaneous emission you would identify two terms for spontaneous emission. One is actually the stimulation by the vacuum field. But there is another term called radiation reaction. So there's, sort of, two terms. Trust me. If not, there are hundreds of pages in [INAUDIBLE], which is books written about it. And in the ground state, the two terms destructively interfere. Therefore, you have no spontaneous emission in the current state, which is reassuring. But then in the excited state the two terms constructively interfere. And therefore, you get spontaneous emission, which is twice as much as you would get if you just look at the stimulation by the vacuum field. So the answer is not quantitative but half of it, yes, can be regarded as stimulated emission by the vacuum fluctuations of the electromagnetic field. OK. We emphasized that spontaneous emission is proportional to omega cube. The question is now what is the dependence in one dimension? If everything the atom can only emit in one dimension, everything is one dimensional, put the atom into a waveguide. So your choices are omega cube, omega square, or omega-- well, if you press D, none of the above. But I can already tell you it's one of those three. So everything the same. But we are in one dimension. The world seen by the atom and by the electromagnetic waves is one dimensional. Yes, it's correct. As you remember, out of the omega cube dependence. Omega square came from the density of states. And what is omega square in three dimension becomes omega in two dimensions and constant density of state in one dimension. So therefore, in one dimension, we are only left with the omega dependence. OK, so there is one factor of omega, which does not come from the density of state. And the next question is where does the other power of omega come from? As we discussed, it's not the density of states. So we have three choices. One is it comes from the atomic matrix element, it comes from the dipole approximation, or it comes from the quantization of the electromagnetic field. OK, the majority got it right. It's a field quantization. Sort of remember when you write down the electric dipole Hamiltonian, in the quantized version, there is a perfecter, which is electric field of a single photon. So if you have a single photon, it gives rise to an electric field squared, which is proportionate to h bar omega. And this is, sort of, the normalization factor. Two more questions. We talked a lot about the rotating wave approximation. And we also talked about it for a spinning system driven by magnetic field. If you have a rotating magnetic field, we do not need the rotating wave approximation because if you drive a spin system with a rotating magnetic field, we have only the co-rotating term. The question I have now for you is whether the same is correct or not for an electronic transition. So therefore, the question is for electronic transitions do we always get the counter rotating term. And if you want to have a simple Hamiltonian, then we do the rotating wave approximation. So the question is is the rotating wave approximation necessary because we always get the counter rotating term for the electronic transition, then the answer is yes. 
Or are there examples where the system is exactly described by only one term? The core rotating term. I will come back to that later in the class. But I thought it's a good question. OK, let me give you the answer. I actually coincide with everybody in the class here because I would tend to say no because there are situations where the counter rotating term can be zero due to angular momentum selection rules. However, if you have an electronic transition and you have a sigma plus transition to one state, there's always a possibility for sigma minus transition. So you usually get both. But if you apply an infinitely strong magnetic field, then the m equals minus 1 state can be moved out of the picture. You have only, let's say, the m equals plus 1 state. And then selection holds mean that the counter rotating term is vanishingly small. But it's an artificial situation. So you can all claim credit for your answer. Finally, the last question is about the Lamb shift. We are now talking about electronic transitions. And the question is Lamb shift-- if it's due to the counter rotating term. In other words, if you have a situation where the counter rotating term is zero, as we just discussed in the previous example that there may be situations. Somewhat artificially but you could arrange for it. The set then implies that there is no lamb shift. So yes or no. Is the lamb shift caused by the counter rotating term involved in electronic transitions? OK. OK, well what else is the lamb shift? It is the AC stock effect of the counter rotating term. So is it due to the counter rotating term? Yes, of course. The lamb shift is the AC stock effect caused by the vacuum fluctuations. That's what it is. But we come to that because I want to discuss later today some aspects of the fully quantized Hamiltonian. And we will, again, in the fully quantized picture see the operators, which are responsible for the core rotating for the counter rotating turn. And then I will point to the operator, which causes a lamb shift. But before I continue, any questions about the questions? Collin. AUDIENCE: When you derive the amplitude in the electric field due to the single photon-- PROFESSOR: Yep. AUDIENCE: I always get the factor of two wrong. So you wrote h bar omega is 2 epsilon 0 [INAUDIBLE] squared. Now there's a contribution that comes from the electric field and magnetic field because you have one factor of two. Then there's always that other factor of two. Are you getting that from using one half h bar because of the vacuum fluctuation. PROFESSOR: I'm not going back to the formula because I run the risk that it was wrong. But all I want to say is what I really mean is use Jackson. Put in a volume V-- an electromagnetic field-- with h bar omega energy. And the electric field squared of this photon, that's what I mean. And if you find a factor of two mistakes in my E square, I can still, you know, get out of theory exit by the rear-entrance door by saying that there is also a difference whether E square is E square RNS or whether E square is the amplitude. You know I mean there are risk factors of two everywhere. But what I mean is really the electric field caused by one photon. And of course, the argument stands. I don't need any factors of two or any subtleties of the electromagnetic field energy. We know that the energy is n plus 1/2 but emission is n plus 1. And these shows that the stimulation by the vacuum field cannot quantitatively account for spontaneous emission. 
AUDIENCE: So the quantity that you set equal to is h bar omega 1/2, not the fluctuation but the real-- PROFESSOR: OK, if you want to know, let's not compare apples with oranges. You want an electric field. And you can pick whether it's the RMS field or whether it is the maximum amplitude. You can pick what you want. But now we are comparing what is the e-square for the vacuum-- for single-mode-- vacuum. And what is the e-square for single photon? The two answers differ by a factor of 2. A single photon is twice as strong in e-square as the vacuum fluctuations in the same mode. That's what it means. Yes? AUDIENCE: I have a question about the quantum emission rate. The explanation that it had-- quantum mechanic derivation that we have, do people not know the formula, how to describe spontaneous emission [INAUDIBLE]? PROFESSOR: I think so. I have not gone deeply back into the story. But a lot of credit is given to Einstein. And as I mentioned last week that Einstein actually had spontaneous emission in his derivation for the Einstein A and B coefficient in this famous paper. And so he found that there must be spontaneous emission based on a thermodynamic argument. It's only spontaneous emission, which brings the internal population of an atom into equilibrium. So I think it is correct to say. AUDIENCE: Can you derive it from that stagnant condition of getting [INAUDIBLE]? PROFESSOR: That's what Einstein did. And the answer is, by comparison with the Planck law, you get an expression for the Einstein A and B coefficient. Now of course, you can go the other way around. You can see if you just use classical physics you would actually expect-- now it depends. If you use the Bohr model, you would expect that the electron is radiating and it was a mystery. How can you have an atom in the ground state, which is circling around a nucleolus, and not radiating at all? On the other hand, in quantum mechanics, we are not assuming that the atom is circulating. And we have an accelerated charge and then we have a time dependent charge distribution. We use the steady state wave function. So I'm not sure if there is maybe an argument, which would say there should be some spontaneous emission based on a purely classic argument. But this would not be the whole story because a classic argument would then deal with the difficulty. Why is there difference between n equals 1, which does not radiate in n equals 2, which radiates. So my understanding is that it is only the physics either through the perspective of Einstein by just using equilibration or our microscopic derivation using filed quantization, which allows us to understand the phenomenon of a spontaneous emission. Other questions? OK, then before we talk about some really cute and nice aspects of the fully quantised Hamiltonian, I want to spend a few minutes talking about degeneracy factors. I've already given you my opinion. You should not think in almost all situations about levels, which have a degeneracy. Just think about states. A state is a state, and it counts as one. And if you have a level which has triple degeneracy, well, it has three states. Just kind of count the states and look at the states. However, there are formula for which involves degeneracy factors. And just to remind you, when we had the discussion of Einstein's A and B coefficient, the Einstein A coefficient was proportionate to the B coefficient responsible for stimulated emission from the excited to the ground state. 
But the Einstein B coefficient for absorption was related to the Einstein B coefficient for stimulated emission by involving these degeneracy factors. So degeneracies appear and in some formal layer that it makes a lot of sense to use them. So I've always said for a fundamental understanding, you should just assume all degeneracies are one. This is how you can avoid, sort of, some baggage in deriving equations. And I'm still standing to my statement. I want to show you now a situation where it becomes useful to consider degeneracy factors. So let me give you an example. We can now look at the situation where we have an excited P state and a ground state, which is S. Or I can look at the opposite situation where we have an S state, which can radiate to a P state. Well by symmetry, the different p states and plus 1 and minus 1 m equals 0 are just connected by spatial rotations. So therefore, their lifetime of the 3 P states and the rate of spontaneous emission are the same. But if you now assume that you have absorption, you go from the S state to the P state. Then you find that the Einstein B coefficient there are now three possible ways. Not just one polarization or 3 polarization. And you will find that this is proportional to three times r. However, in this situation, it's a reverse but let me just finish here. So here the natural align rates and the rate of stimulated emission described by the coefficient from the excited state to the ground state is proportionate to R. Whereas, in the other situation, if you have absorption now, well, each of those levels, there's only one transition, one pass way. Therefore, you will find that the coefficient for absorption is proportionate to R. Whereas, gamma and the stimulated emission, which is now BSP, is proportionate to three R because there are three pathways. So depending what the situation is, you have to be careful. And you would say-- but if it's an S to P transition, it maybe connected by the same matrix element. And therefore, you would say shouldn't there be align strings, which is independent whether you go from S to P or P to S, which just describes in a natural way what is really the coupling between S and P state? And yes indeed, there is in the literature some definition of line strings where the lines strings S would be proportionate to the sum of all of the eights between an initial and the final state. And do sum over all. So therefore, when you use this formula for the line strings, whether you have the situation on the left side or on the right side, you will do always the sum over the 3 possible transitions. So the lines things is the same for both situations. It's just generic for an S to P transition. So if you use this definition but then you have the situation that spontaneous emission is always given by the line strings but you have to multiply now by the multiplicity of the excited state. If you have a P state, the whole line strings is distributed over three states. And each state has only a spontaneous emission rate, which is a third of what the line strings gives you. I don't want to beat it to death, because I hate degeneracy factors. But I just thought this example with the P to S and S to P transition tells you why they necessarily have to appear in derivations like Einstein's A and B coefficient. I hope there are no further questions about degeneracies. 
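In formulas, the bookkeeping just described is, schematically,
\[
g_g\,B_{g\to e} \;=\; g_e\,B_{e\to g}, \qquad
S \;=\; \sum_{m_e,\,m_g} A_{\,e,m_e \to\, g,m_g}, \qquad
\Gamma_e \;=\; A \;\propto\; \frac{S}{g_e},
\]
so for the P-to-S case each of the three excited sublevels decays at a rate proportional to R, while for S-to-P absorption the single ground state sees all three pathways and its B coefficient is proportional to 3R.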
But you know, making this comment also allows me to say, well, when I derived the Einstein A coefficient-- what we did last class-- I did not use any degeneracy factors. Well, this is correct. Our derivation assumed that there was-- we assumed that there is only one final state. We did not include degeneracy factors. We also assumed that we had a dipole matrix element, which was along the z-axis. And so by those definitions, I have implicitly picked a geometry, which can be represented by that we have an exciting piece state in the m equals 0 state. And we have a pie transition with linear polarization to this s state. And by doing that, I did not have to account for any degeneracies. But in general, if you derive microscopically an equation for spontaneous emission, you may have to take into account that your excited state has different transitions-- sigma plus and sigma minus transitions-- to different states. And you have to be careful how you do the sum over all possible finer states. And this is where degeneracies would eventually matter? Questions? OK, so then lets go from P counting or accounting for the number of states to something, which is hopefully more exciting. We want to talk about the fully quantized Hamiltonian. So what we are working towards now and it may spill over into the Wednesday class is I want to give you the sort of paradigmatic example of cavity QED where an atom within an excited state is in an empty cavity. And now it can emit a photon into the mortification mode of the cavity. But these photon can be reabsorbed. So this is a phenomenon of vacuum Rabi oscillations. And so I want to set up the Hamiltonian and then the equation to demonstrate to you the vacuum Rabi oscillations. And for me, the vacuum Rabi oscillations are the demonstration, that spontaneous emission, has no randomness, no spontaneity, so to speak because you can observe coherent oscillation. A coherent time evolution of the whole system and which is possible only due to spontaneous emission. So let's go there. So just to make the connection, a few lectures ago, we had a semi classical Hamiltonian. This is when I wanted to show you that the two level electronic system can be mapped onto a spin one half system driven by magnetic field. So this was when we only looked at the stimulated term when we only did perturbation theory. And in that situation, we had the electronic excitation. And then we had the drive field, which was assumed to be purely classical like a rotating magnetic field which drives spin up spin down transitions magnetically. And we concluded that, yes, if you use a laser field, it does exactly the same to a two level atom what a magnetic field does to spin up spin down. But now we are one step further. We've quantized the electromagnetic field. And we have spontaneous emission. And this is something, for reasons I just mentioned, you will never find in spin up spin down because it will take 1,000 years for spontaneous emission to happen. So now we want to actually go beyond this semi classical picture, which is fully analogous to the precession and rotation of the spin in a magnetic field. And we want to add spontaneous emission. So what we had here is the Rabi frequency was a matrix element-- the dipole matrix element-- times a classic electric field. And we want to replace that now by the electric field at the position of the atom. But we want to use the fully quantized version of the electric field. 
And it also becomes useful to look at the sigma x operator, which actually has two matrix elements [INAUDIBLE], which connect the ground-excited and excited-ground states. One of them goes from the excited to the ground state, so this is, sort of, lowering the energy-- the sigma minus operator. And the other one will be a raising operator; it raises the excitation of the atom, and we will refer to it as sigma plus. So the electric field is replaced by the operator obtained from the fully quantized picture. Here we have the prefactor, which is the electric field of a single photon-- or half a photon, whatever; there are factors of 2 or square root of 2. We have the polarization. And now take the previous result and look at it. Well, we want to go to the Schrodinger picture, and I mentioned that in the Schrodinger picture the operators are time independent, so we cancel the e to the i omega t term. If you go to the result we had last week and simply get rid of the e to the i omega t term, you would now find operators a and a dagger, but they would have factors of i in front of them. That's the equation we had when we derived it. Well, I prefer now to use something which looks nicer, just a and a dagger, and you can obtain that by shifting the origin of time. So we're not looking at e to the i omega t at t equals 0; we wait a quarter period, and then e to the i omega t just gives us factors of i, which conveniently cancel the other factors of i. So what I'm doing is just for convenience. And let me write down that this is in the Schrodinger picture. OK. So we want to absorb all constants into one constant now, which is the single photon Rabi frequency. We have the dipole matrix element of the atom, there's a dot product with the polarization of the light, and then we have the electric field amplitude of a single photon, square root of h bar omega over 2 epsilon 0 V. So this is what appears in the coupling, and we want to write it as h bar omega 1 over 2, where this omega 1 is the single photon Rabi frequency. And with that, we have now a Hamiltonian which is really a classic Hamiltonian, written down in the standard form. It has the excitation energy times the sigma z matrix. It has the single photon Rabi frequency. And then the operator for the electric field, after getting rid of the i's, is simply a plus a dagger. So this takes care of the photon field, and the operators which act on the atom are the raising and lowering operators, sigma plus and sigma minus. And finally, we have the Hamiltonian which describes the photon field, which is h bar omega times a dagger a, the photon number operator. Any questions? Yes? AUDIENCE: [INAUDIBLE]? PROFESSOR: I mean, we are looking at the interaction with an atom which is at rest at the origin. Therefore, e to the ikr is just 1. We will only consider the spatial dependence e to the ikr when we allow the atom to move. As long as the atom is stationary, for convenience we put the atom at r equals 0. But in 8.422, when we talk about light forces and laser cooling, then it becomes essential to allow the atom to move, and this is actually where the recoil and the light forces come into play. But as long as we're not interested in light forces, only in the internal dynamics-- ground and excited state-- we can conveniently neglect all spatial dependencies. Other questions? So this is really a famous Hamiltonian.
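For concreteness, since every symbol here is just a small matrix, the whole Hamiltonian can be assembled numerically. This is a minimal sketch in my own notation (illustrative parameter values, truncated photon space, hbar = 1), not the lecturer's code:

import numpy as np

N = 10                   # photon-number cutoff
w_atom = 1.0             # atomic transition frequency
w_cav = 1.0              # mode frequency
w1 = 0.05                # single photon Rabi frequency (assumed value)

a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator, <n-1|a|n> = sqrt(n)
adag = a.conj().T
sz = np.diag([1.0, -1.0])                    # |e><e| - |g><g|, basis ordered (e, g)
sp = np.array([[0.0, 1.0], [0.0, 0.0]])      # sigma_plus = |e><g|
sm = sp.T                                    # sigma_minus = |g><e|
id_ph, id_at = np.eye(N), np.eye(2)

H = (0.5 * w_atom * np.kron(id_ph, sz)            # atomic excitation
     + w_cav * np.kron(adag @ a, id_at)           # photon field, a_dagger a
     + 0.5 * w1 * np.kron(a + adag, sp + sm))     # coupling: (a + a_dagger)(sigma_+ + sigma_-)
print(H.shape)   # (2N, 2N): product space of photon Fock states and the two-level atom

Replacing the coupling line by np.kron(a, sp) + np.kron(adag, sm) keeps only the two intuitive terms, which is the rotating wave approximation discussed below.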
And you also see how natural the definition of the single-photon Rabi frequency is. So we have one half h bar omega times the diagonal sigma z matrix-- this is the atomic excitation, the unperturbed Hamiltonian of the atom. We have the unperturbed Hamiltonian of the photon. And now the two are coupled, and the coupling is a product of an operator acting on the photon field-- plus or minus one photon-- and an operator acting on the atom-- plus or minus one atomic excitation. So let me just remind you of that: the sigma plus and sigma minus operators. Sigma plus is the atomic raising operator, which takes the ground to the excited state, and the sigma minus operator is the atomic lowering operator, which takes the atom from the excited to the ground state. So this is our Hamiltonian. And the Hilbert space on which this Hamiltonian acts is the product space of the atom, direct product with the states of the light. In other words, the basis states we use for the atom are the states which have zero or one quantum of excitation-- so we use the excited state or the ground state-- and for the photon we can just use the Fock states, where the occupation number is n. Questions about that? So just look at it with some enjoyment for a few seconds. I mean, this is a Hamiltonian which has just a few terms, but what is behind it is, of course, the power of all the definitions-- each symbol has so much meaning. In the end, by having this formalism of operators and the quantized electromagnetic field, we can capture many, many aspects-- we can pretty much fully describe how a two level system interacts with a quantized electromagnetic field-- with that one set of equations. Of course, it is not that everything is so simple; it is that, by understanding the physics, we have skillfully made definitions which allow us to write everything down in this compact form. So often something is simple to write down, but if there's a lot of physics insight behind it, we spend some time discussing it. And the first thing I want to just point out and discuss is the interaction term. We have the product of sigma plus plus sigma minus with a plus a dagger, so this interaction part has, in a very natural way, four terms. Well, let me just write them down: sigma plus with a, sigma minus with a dagger, sigma plus with a dagger, and sigma minus with a. OK, so let's discuss those. Sigma plus with a is actually an absorption process: a reduces the photon number by one, and sigma plus increases the atomic excitation, from the ground to the excited state. The other term, sigma minus with a dagger, looks naturally, intuitively like emission: the a dagger operator takes us from n to n plus 1, and sigma minus takes us from the excited state to the ground state. So these are the two terms which we would call the intuitive terms, because they make sense. The other two terms are somewhat more tricky. Sigma plus with a dagger means we create a photon and we create an excitation. So in other words, it's not that, as in the other terms, a quantum of excitation disappears from the field and appears in the atom, or vice versa. Sigma plus a dagger means we have an atomic excitation, which takes us from the ground to the excited state, plus we emit a photon at the same time. And sigma minus with a means that we go from the excited to the ground state-- so we have an atomic de-excitation-- and I would say, well, if the atom is de-excited, it should emit a photon.
But instead, the photon disappears. So we have those processes. The last two are sometimes referred to in the theoretical literature. They are off shell. Under shell is energy conservation. Off shell means they cannot conserve energy. But nevertheless, these are terms which appear in the operator. But you should be used to if you have often terms in the operator which cannot drive a resonant transition. When you looked at the DC stock effect or when we looked at the AC stock effect for low frequency photons, those low frequency photons cannot excite an atom to the excited state. So they are not causing a transition, but they led to energy shifts in second order perturbation theory. So therefore, those terms this language now cannot drive transitions. They can only drive transitions to virtual states, which would mean they can only appear in second order perturbation theory that you go up to a so-called virtual state but you immediately go down. And those terms give only rise to shifts. No transitions because you couldn't conserve energy in the transition. But you can do shifts in second order. And one example, which we discussed in the clicker question is that those shifts are actually lamb shifts. And in other places, especially in the context of microwave fields, they are called Bloch-Siegert shifts And let's just look at one specific state. And this is the simplest of all. We have the vacuum no photons. And the atom is in the ground state. If you look at the four possibilities of the interaction term, there is only one non vanishing term. The photon is at the bottom off all possible states. The atom is at the bottom of the possible states. So when we act with the four terms on it, the only term which contributes is where those is where those are raised because all the others are 0. The only non vanishing term is where we create a virtual atomic excitation and also a virtual excitation of the photon field. And we know that when we have an atom in the ground state in the vacuum that the only manifestation of the electromagnetic field is, of course, not spontaneous emission but the lamb shift. So therefore, if you would apply this operator to the bound state of an electron in an atom, the complicated 1s wave function of hydrogen and sum this operator over all modes of the electromagnetic field. Then you would have done a first principle QED calculation of the lamb shift. I'm not doing it but you should understand that this operator-- sigma plus a dagger-- is you operator for the Lamb shift. Questions? Yes? AUDIENCE: [INAUDIBLE]? PROFESSOR: Oh, no, everything is. If you have a two level system, this Hamiltonian captures everything which appears in nature if you have a two level system interacting with the electromagnetic field. That's it. A radiation reaction is just something we can pull out of here. Stimulated emission we can pull out of here. The way how vacuum fluctuations create a lamb shift or the way how vacuum fluctuations affect an atom in the excited state, everything is included in here. The question is just can we solve it. And the calculations can get involved. But this is the full QED Hamiltonian for a two level system. That's a full picture. I mean, that's why I sort of said before be proud of it. You understand the full picture of how two level systems interact with electromagnetic radiation. The only complication is, yes, if you put more levels into it and such and things can get richer and richer. 
And-- yes, we have also made the dipole approximation, which we're just wondering how critical it is. Well, we use the electric field a and a dagger, but my gut feeling is it doesn't really matter what we have. Here is the most generic term, which can create and annihilate photons, and we have the a and a dagger term. Actually, I don't know what would happen if you don't make the dipole approximation. Well, if you have two levels which are coupled by magnetic dipole, then you have the same situation. It is just your prefactor, the semi photon Rabi frequency, is now alpha times smaller because of the smaller dipole matrix element. So I think you can pick, pretty much, any level you want. And this is why I actually discussed matrix elements at the beginning of the unit. For, pretty much, all of the discussion you're going to have, it doesn't really matter what kind of transition you have as long as the transition creates or annihilates a photon. And all the physics of the multiplicity of the transition, magnetic, dipole, electric, quadrupole, or whatever just defines what this the semi photon Rabi frequency is. You've put me on the spot, but the only thing which comes to my mind now is if you would formulate QED not in the dipole approximation but through with the p minus a formulation. Then we have an a-square term. And then we have the possibility that one transition can emit two photons. So that's not included here. AUDIENCE: So that's higher-- PROFESSOR: This would be something higher order. On the other hand, we can shoulder the canonical transformation that the p minus a formalization with the a-square term is equivalent to dipole approximation. So the question whether you have a transition which emits two photons simultaneously or two photons sequentially eventually by going through an immediate state, this is not a fundamental distinction. You can have one description of your quantum system via two photons automated in one transition. You have another description of your quantum system where photons cannot-- only one photon can be emitted. And then you have to lend an intermediate state. And you would say, well, either two photons at once or one photon at a time. This is two different kinds of physics. But we can show that the two pictures are connected with economical transformation. So therefore, you have two descriptions here. But anyway, I'm going a little bit beyond my knowledge. I'm just telling you bits and pieces I know. But this Hamiltonian is either generally exact. I just don't know how to prove it. But it really captures in all of the QED aspects of the system we want to get into. So OK. So in many situations we may decide that the off shell terms of the interaction just create level shifts, Lamb shifts, Bloch-Siegert shifts. And we may simply absorb those lamb shifts in our atomic energy levels, omega e and omega g. So therefore, for the dynamic of the system, if you include all of those lamb shifts in the atomic description, you do not need those off shell counter intuitive terms. These are actually also the counter-rotating terms in the semi classical approximation. We only keep the intuitive terms. And that's called, again, the rotating wave approximation. Just to remind you, we do not have rotating waves here. Everything is operators. But the same kind of physics-- co- and counter-rotating-- appears here that we have four terms. Two are the fully quantized version of the co-rotating terms. 
And the other two-- the off shell terms-- are the quantized version of the counter-rotating terms. So therefore, if you neglect those two off shell terms, we now have the fully quantized Hamiltonian in the rotating wave approximation. So let me just write it down, because it's also a beautiful line. We have the electronic system. We have the interaction Hamiltonian, which has now only the two terms: when we raise the atomic excitation, we lower the photon excitation, and vice versa. And we have the Hamiltonian for the photon field, a dagger a. And this is, apart from those Lamb shift terms, the full QED description of the system. Here, of course, in general the Hamiltonian has to be summed over all modes, and then you get spontaneous emission and everything we want. But if you have a situation where you only look at one single mode, then you have what is called the famous Jaynes-Cummings model. And a very important result of this Jaynes-Cummings model are the vacuum Rabi oscillations, which I want to discuss now. OK. So it's called the Jaynes-Cummings model; let me describe to you why it is a model. Well, it assumes a two level system, and we find a lot of candidates among the atoms we use. Sure, our atoms have hyperfine states, but we can always select a situation where, essentially, we only couple two states: we can prepare the initial state by optical pumping, and then use circularly polarized light on a cycling transition. And this is how we prepare a two level system in the laboratory. So that's one assumption of this model, the two level system. But the second assumption is that the atom only interacts with a single mode, and that requires a little bit of engineering, because it means we need a cavity. So let me just set up the system. Our laboratory is a big box of volume V, and this is where we would quantize the electromagnetic field to calculate spontaneous emission. Our atom here may actually decay with the rate gamma, which is given by the Einstein A coefficient, and in order to describe this spontaneous emission, we quantize the electromagnetic field in the large volume V. But now we have a cavity with two mirrors, and those two mirrors define one mode of the electromagnetic field, which will be in resonance or near resonance with the atom. Well, there will be some losses out of the cavity, which eventually couple the electromagnetic mode inside the cavity to the other outside modes in the bigger volume V, and this is described by a cavity damping constant kappa. What is also important is that, when we use the cavity to single out one mode of the electromagnetic field, the cavity mode volume is V prime, and we often make it very small by putting the atoms in a cavity where the mirror spacing is extremely small. OK. We know, and I'm not writing it down again, what the Einstein A coefficient is. The Rabi frequency-- the single photon Rabi frequency-- which couples the atom to the one mode of the cavity has this important prefactor, which is the electric field of one photon in the cavity. And importantly, it involves the electric field of the photon in the cavity volume, which is V prime. So now you see what our experimental handle is. If you make this volume very small, then we can enter the strong coupling regime, where the single photon Rabi frequency for this one mode selected by the cavity becomes much larger than the spontaneous emission into all the many other modes.
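To get a feeling for why a small mode volume helps, here is a back-of-the-envelope evaluation with assumed, illustrative numbers (an optical transition near 780 nm, a dipole moment of a few atomic units, a mode volume of 10 microns cubed in each dimension; none of these numbers are from the lecture):

import math

hbar, eps0, c = 1.054571817e-34, 8.8541878128e-12, 2.99792458e8
e, a0 = 1.602176634e-19, 5.29177210903e-11

d = 2.5 * e * a0                       # assumed dipole matrix element, a few atomic units
omega = 2 * math.pi * c / 780e-9       # optical transition frequency
V_mode = (10e-6) ** 3                  # assumed cavity mode volume V'

E_photon = math.sqrt(hbar * omega / (2 * eps0 * V_mode))  # electric field of one photon in V'
g = d * E_photon / hbar                                   # single-photon coupling (conventions differ by factors of 2)

print(g / (2 * math.pi) / 1e6, "MHz")  # ~ 1e2 MHz

With numbers like these the single photon coupling comes out around a hundred megahertz, comfortably larger than a natural linewidth of a few megahertz, so the strong coupling condition can plausibly be met if the mirrors also keep kappa below this scale.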
So the interaction with this one mode, due to the cavity and the smallness of the volume, is, sort of, outperforming all these many, many modes of the surroundings. And that would mean that an atom in an excited state is more likely to emit into the mode between the two cavity mirrors than into any other mode to the side. Secondly, of course, once the photon has been emitted into the cavity, the photon can still couple to the other modes through the cavity losses kappa. And now we assume that we have such high reflectivity mirrors that kappa is also smaller than the single photon Rabi frequency. And this is called the strong coupling regime of cavity QED. So then we can, at least for a limited time, observe the interplay between a single mode of the cavity and a two level system. And this is the Jaynes-Cummings model. So in that situation, the fully quantized QED Hamiltonian couples only pairs of states, and we label those pairs by the manifold n. So we have an excited state with n photons, and it is coupled to the ground state with one more photon. Our Hamiltonian has two coupling terms-- remember, the other two were dropped in the rotating wave approximation-- and we can go from left to right with sigma minus a dagger, and we can go from right to left with the operator sigma plus and the annihilation operator a. So as long as we have a detuning delta which is relatively small, the rotating wave approximation is excellent. So let me just conclude by writing down the Hamiltonian for the situation I just discussed, and then we'll discuss this Hamiltonian on Wednesday. So if this axis is energy, we have two levels: the excited state with n photons, and the ground state with n plus 1 photons. If the photons are on resonance, the two levels are degenerate, but if you have a detuning delta, the two levels are split by delta. And what we are doing right now, for [INAUDIBLE] the Hamiltonian, is to shift the origin so that the zero of energy is just halfway between those two states. That's natural; it just avoids offsets in our equations. So our Hamiltonian now has the splitting of plus minus delta over two. The coupling has the prefactor which is the single photon Rabi frequency, and then the a and a dagger terms give, for photon number n, square root of n plus 1. So what I wrote down now is the Hamiltonian in the rotating wave approximation, which describes only one pair of states. But our Hilbert space is, sort of, a ladder of such pairs, one pair of states for each label n, and each of them is described by its own decoupled Hamiltonian. So that's what I wanted to present to you today. And I will show you on Wednesday how this Hamiltonian leads to Rabi oscillations induced not by an external field, but by the vacuum. Any questions?
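As a preview of the Wednesday result: the two-by-two manifold Hamiltonian just described already contains the vacuum Rabi oscillations. A minimal sketch using the standard two-level Rabi formula (hbar = 1, illustrative values; starting in the excited state with n photons, the probability of having emitted the photon oscillates at W = sqrt(delta^2 + omega_1^2 (n+1))):

import numpy as np

w1 = 2 * np.pi * 1.0   # single photon Rabi frequency, in units of your choice
delta = 0.0            # atom-cavity detuning; on resonance here
n = 0                  # photons before emission: n = 0 is the vacuum manifold

W = np.sqrt(delta**2 + w1**2 * (n + 1))
t = np.linspace(0.0, 2.0, 9)
P_emitted = (w1**2 * (n + 1) / W**2) * np.sin(W * t / 2)**2
print(P_emitted)   # oscillates between 0 and 1: the photon is emitted and then coherently reabsorbed

On resonance and in the vacuum manifold the oscillation frequency is just omega_1, which is why these are called vacuum Rabi oscillations.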
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
9_Atoms_V_and_Atoms_in_External_Fields_I.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Let's get started. We want to wrap up today our discussion of atoms without external field, and then discuss what happens when we put atoms into external magnetic fields. Last class, we talked about isotope shifts, and there was a question, how big are those isotopes for some of your favorite atoms? I just looked up some information on lithium, which is a light atom, and this paper here shows calculations compared with experiments. The isotope shift between lithium six and lithium seven due to the mass, lithium six to lithium seven, is about 10 gigahertz. The volume effect is only two megahertz, 1,000 times smaller. However, the precision of experiments is such that if you find an isotope shift, the mass effect can be exactly calculated from the atomic masses, you can still get information about the size of the atomic nucleus out of it. So this is the example of a light atom, 10 gigahertz mass effect, two megahertz volume effect. And here is a rubidium atom. The isotope shift of the D1 line between 85 and 87 is 77 megahertz only. The mass shift is 56 megahertz, and the remainder is mainly the volume effect. There's a specific mass effect due to electronic correlations if you calculate electrons, which I think is small here but I don't want to discuss it. We can now compare the mass effect in rubidium versus lithium. First of all, the shift compared to infinite mass. the reduced mass effect, is much bigger in lithium. It's 20 times bigger in lithium than in rubidium. However, when you go from lithium six to lithium seven, the mass changes by 15%, so the delta m over m is also much larger for lithium than rubidium. And if you add those two factors, you find that the mass effect is 200 times larger in lithium, 10 gigahertz versus 50 megahertz. The nuclear volume effect is two megahertz for lithium, 20 megahertz for rubidium, so that sets the scale. Any questions about that? Let me come back to one other question which we discussed last class, and this was the question about if you have a deformation of a nucleus, or if you have any kind of isotropic shape of an object, what is the minimum angular momentum in order to observe it? I know a lot of you got confused about it, so I want to discuss the same thing again, but now focusing on two different frames, the lab frame and the body-fixed frame. I hope you find this discussion insightful. So let's assume we have an object, it can be a molecule. Actually, you have a homework assignment whether you can observe a permanent dipole moment of a molecule, and this will lead you into a discussion of lab frame versus body-fixed frame. But let's assume we have an object, assume it's a nucleus, which has a really odd shape. However, if it has angular momentum of zero, all you have is one level. If you have an angular momentum of 1/2, you have two levels, and you can now define in the laboratory that the energy difference is due to the magnetic dipole moment, for instance, or the electric dipole moment if you put in electric field. If you have I equals 1, you can have three levels, E1, E0, E minus 1. Let's say you have put the atoms into an electric field gradient. 
You can then ask if E0 is in the middle between up and down, plus 1 or minus 1, or whether it's displaced or not. And depending whether this is larger or smaller than zero, you would say there is a quadrupole moment which is larger or smaller than zero. In other words, what I'm telling you is if you have only one level, I equals 0, you can't say anything about the shape of the object. If you have two levels, you can determine a dipole moment. If you have three levels, by the deviation from the equidistance, you can find a higher order moment. But now comes the point. You can now take two positions. You can say that for low I, the deformation of a magnetic moment for I equals 0 cannot be measured. Therefore, it is zero. Or you can say deformation exists, but only not in the lab frame but in the body-fixed frame. So in other words, you would say that the deformation exists always. It's just a measurement problem. At low angular momentum, I cannot measure. Or the other statement would be, well, if you can't measure it, [INAUDIBLE], it means it doesn't exist. So which statement or which conclusion is correct? This is the body-fixed frame argument, and this is more based on the lab frame. Well, I would say the lab frame argument is always correct because you can't measure it, you can't determine it in the lab, but let me go from there. Let's assume we have an object, and we think it has a deformation but it has a low angular momentum phase. Can we still say that in its body-fixed frame, we have an object which has a deformation or not? Now, my personal opinion is the following. It really depends. If you have a system where you can have angular momentum without changing the internal structure. For instance, I have this stick, and I can just spin it up, and if I can spin it up, I can get an angular momentum wave packet, I can orient it and measure its deformation. Then I think I would say, even if this state has zero angular momentum, it has a deformation, and the way I know it because if I add angular momentum, I can measure the deformation. I cannot measure it at low angular momentum. So then you would say in the body-fixed system, there is a deformation, but it only manifests itself in the lab frame if I add angular momentum. However, you may have an object, let's say a molecule, which is so weakly bound that one quantum of angular momentum due to centrifugal forces rips it apart. That exists. Extremely bound state which cannot provide enough binding force to withstand even one unit of angular momentum. So that's an object which you cannot rotate, you cannot transfer any angular momentum without destroying it, without ripping it apart. And now to say that this molecule has a dipole moment or has some anisotropy, you're making a statement which cannot be tested at all. So at that point, you should rather say, what matters is what I can measure in the lab, and I will never be able to measure any deformation in the lab. So the first example with the stick will apply to a very stable molecule which may have a dipole moment, and you assume it has the same dipole moment whether it's rotating or not. And then you would say at zero angular momentum, I cannot measure the dipole moment but I know it exists in the body-fixed frame. If you have a nucleus, and you have sort of a wave function of the protons and neutrons, and it's I equals 0, you will not find any moment. At least for the ground state of nuclei, when you add angular momentum, you really change the internal structure. 
You have to promote nucleons to higher orbits, so you cannot add angular momentum and still have the same object. So therefore, you have to say the ground state with I equals 0 has no deformation, because there is no way to ever find [INAUDIBLE] deformation, so it doesn't exist. But there are excited states of nuclei which have a deformation, and these deformed nuclei can be put into a multiplex of angular momentum states. So then you would see, I have the same kind of nucleus, but at different angular momentum states. At higher angular momentum states, you can define [INAUDIBLE] determine quadrupole moments and things like this. And then you may say the same internal state has now a non-rotating state, and you would still be tempted, and you are correct with that, to associate a deformation even in the non-rotating state. I hope those remarks help you reconcile the two aspects whether you have an object which is stable enough to be spun up, and then I think you can always talk about the body-fixed frame. But if you have an object where you change the internal structure when you add angular momentum, I think, for fundamental reasons, you cannot associate any deformation with it. Any questions about that? Let me just write down the summary. The definition of a deformation in a body-fixed frame makes sense only if you can add or change angular momentum without significantly changing the internal structure. Questions? I think the computer is set up. I cannot go backward in Presentation mode. What I want to discuss next is give you a little bit of an historic summary how spectroscopy of hydrogen is developed, in particular also focusing on one important discovery, the discovery of QED through the Lamb shift. Of course, you all love hydrogen because it's the simplest atom but it has so much interesting physics in it. I've summarized for you here some papers on hydrogen, and I used them to illustrate several points. I will show you that actually, the discovery of the Lamb shift had precursors. 10 years before the Lamb shift was discovered, people had even some idea that something may be wrong with the understanding of the structure of hydrogen. So you can say they came so close, people 10 years before, in realizing the Lamb shift, there were people who maybe missed the Nobel prize by just a tiny little bit. They had all the insight that there may be QED correction in hydrogen. They just didn't have the technology to measure it accurately enough. The second example I want to show is that we always talk about fundamental limitations, but fundamental limitations can disappear in time because they may not be as fundamental as they appear. So for instance, there were limitations with the limitations of the Lamb shift because you had a short lifetime of p states, but with the event of two photon transition, you can go from s to s and s to d states, and therefore map out Lamb shifts with much, much higher precision not limited by the finite lifetime of p states. Finally, you would say Lamb shifts are small splittings, and for many, many years, Lamb shifts were measured by making radio frequency transitions between two s and two p states. Well, today, the most accurate measurement of the Lamb shift is with an optical transition where you need a much, much higher relative precision to see the tiny Lamb shift. 
But optical metrology with direct frequency measurements and frequency combs has so much improved in precision that now, an optical measurement, even if it comes to a tiny difference, is more accurate than a direct [INAUDIBLE]. So I think the history of hydrogen shows you that technology can completely change the paradigm how measurements are made. Fundamental limits disappear because new tools or new insight is available. And also, I find it interesting that discoveries often have precursors, and people have a hunch, know about it, and then finally, it is discovered. Let me just take you through some historic papers. This paper is 1933, 15 years before the Lamb shift, and it says that one possible explanation for some discrepancy of the structure of the Balmer lines is that the effect of the interaction between the radiation field and the atom has been neglected. That's QED. You cannot just calculate the structure of the hydrogen atom from the Coulomb field. You have to allow the radiation field, all the modes of the vacuum to be included. So this insight is not due to Lamb It was there already, 1933. Same year, look at the title, "On the Breakdown of the Coulomb Law for the Hydrogen Atom." People speculated or discussed that the Coulomb law will not be valid at very small distances. This is ultimately what QED, raided correction, the Lamb shift, vacuum polarization all is about. Finally, people had an understanding of the hydrogen atom, and they measure-- I want you to keep that in mind-- they did optical. They measured the Balmer lines of hydrogen and deuterium, and they couldn't fully resolve it because of the finite lifetime of the peak state, but there was some hunch when you try to get the envelope from the underlying structure that there was a discrepancy. It was just not significant enough to say for sure, there is an additional line shift which is not accounted for by theory. There was a discussion that there is a deviation of the Coulomb law, but here is the insight. As was indicated by previous authors, the interaction required to change the Coulomb law at small distances is much too large to be accounted for by the assumption of a finite size of electron and proton. So the Coulomb field has to be modified at short distances in a much stronger way than just the finite size of the proton. We'll talk about the finite size of the proton in a minute. And then, of course, 1947, UIF oscillators have been developed in the pursuit of radar, experimental tools are there now-- high power IF sources, cumulative sources, and such, and then Lamb and Retherford in his landmark paper look at the fine structure of the hydrogen atom, and this is the famous result. They measured transitions as a function of magnetic field, and you see the solid line, which I think was the theory without the Lamb shift. The dashed line is the hyper fine structure of hydrogen, and the lines converge, and the difference is 1,000 megacycles, the first determination of the Lamb shift. So this was 1947. It's interesting that it's just one or two weeks later, there is a theoretical paper by Hans Bethe providing an explanation for the Lamb shift, so already coming up with the first model how to account for QED. I didn't look it up in detail, but I thought the spirit was similar to what I presented you in class, that the electromagnetic zero point energy is shaping the electron and leading to corrections. But it's amazing that within weeks, theorists figured nearly out, yes, this is the explanation. 
This is how we have to explain the theory. For a number of years, people pursued measurements of the Lamb shifts with higher and higher accuracy. I just like the last sentence here. This is now the next paper by Willis Lamb and Retherford, and they are sort of saying that they wanted to measure the Lamb shift with higher precision, but then they said, "the program was large and encountered unexpected difficulties which required much more time to surmount. As a result, the paper promised two years ago was delayed." I think this applies to many, many papers to be written, but here, the authors even say that upfront, it took us two years longer to do the research than we initially anticipated. You see now the growing accuracy. We have the Lamb shift, which is on the order of 1,000 megacycles, one gigahertz, and the precision is now in the 100 kilohertz range. This was the technology of the original discovery. Then there was a next generation of experiments on the Lamb shift using separated [? oscilloatomic ?] fields, [INAUDIBLE] techniques. We'll talk about that later in the course. And with these techniques, the accuracy of the Lamb shift is now one digit further in the 10 kilohertz region. There is a nice feature that we'll also discuss later, that it was possible to obtain line weights, which is [INAUDIBLE]. We'll talk about it later, but it's possible to do spectroscopy on unstable states, which provides line widths narrower than the actual line widths. If you want to get one sentence as an appetizer, you just look at the atoms which have not decayed for a long time, and if you play some tricks, you can then get line widths which covers points to several lifetimes, and not just to the one [INAUDIBLE] lifetime. Some conditions have to be met, and those authors used it here to advantage to narrow the line for the measurement of the Lamb shift. Yes? AUDIENCE: The abstract says that the result is not in good agreement with theory. What is the theoretical-- how far had it gone? PROFESSOR: I don't know it at this point. I will later give you comparison with theory which is highly accurate. I assume at this point-- I'm not sure if it was instrumental difficulty. I didn't [INAUDIBLE]. So these are the same authors just a few years later. The agreement between theory and experiment is two standard deviations, so I think this problem disappeared. I don't know if it was the fault of theory or experiment. But now we go a step forward to optical spectroscopy. You remember originally, and this is why the Lamb shift was not jumping into people's eye when they did spectroscopy of the Balmer spectrum of hydrogen, you cannot resolve the structure. The Lamb shift is there, but it's hidden in the envelope of the unresolved lines. And now the advent of lasers, and people immediately developed saturation spectroscopy. That's how most of our laboratories stabilize lasers using Doppler-free saturation spectroscopy. And when saturation spectroscopy was invented by Hansch and Schawlow for the first time, you can break through the Doppler [INAUDIBLE], and now you see the lines resolved. And here, I think for the first time, you see two peaks separated and the splitting is the Lamb shift, which until then was only accessible through [INAUDIBLE] frequency methods. Of course, these were the first lasers, just pulsed lasers, and we couldn't even think about precision. But then, of course, using metrology, using frequency chains, people could do precision measurements in the optical domain. 
These are now papers in the '90s, "Optical Measurement on the Lamb Shift in the Ground State or in the Excited State." Talking about the comparison between experiment and theory, the experiment is pushed to higher and higher precision, and suddenly, there was a discrepancy, and it was a discrepancy in the 1s Lamb shift. In the 1s state, the Lamb shift is much bigger than in the 2s state because the electron interacts much more intimately with the [INAUDIBLE] than the Coulomb potential is on this. People found that the experiment did no longer agree with theory. But then the theorists had to check all their assumptions, and it was found that there were two new binding corrections which were surprisingly large. Often, you make an estimate that those terms are small. You say it's higher order, but you may not know the pre-factor. And here, something was surprisingly large, and by improving the theory, there was again agreement between experiment and theory. It's getting now down to the kilohertz level. And at least as of a few years ago, this was state of the art. Remember, the Lamb shift is about 1,000 megacycles, one gigahertz, and now the precision is in the single kilohertz. As I pointed out, precision was reached by directly measuring the frequency of the laser, frequency metrology. This was actually, for historic interest, frequency metrology where they used beat notes between the laser used to measure the hydrogen line and some other lasers. This is just a few years before comb generators. Frequency combs completely changed things again. But they had already the precision of a direct frequency measurement. You can read about it when I post it. It shows you what the [INAUDIBLE] is in those installations. This is a slide I borrowed from Ted Hansch, "Optical Spectroscopy of Hydrogen." It just shows the advances in frequency metrology. It shows how caesium clocks and optical spectroscopy have changed in precision, and eventually, we are now at the point where optical spectroscopy is more accurate than microwave and radio frequency spectroscopy. Optical clocks are more accurate than [INAUDIBLE] frequency standards. I'm sure in your lifetime, you will experience the redefinition of this again because the caesium clock is no longer accurate enough compared with the most precise optical measurements. And I think there was a kink, a change in slope here. In the old days, you measured the wavelength of light by making a measurement of the wavelength using maybe a grating or interferometry, but when you started to measure frequencies directly, then there was a change in slope and major improvements in precision. And today, of course, the most precise measurement of laser frequencies is not through the wavelength; rather, you can count the number of cycles in a beat note with an optical comb generator. So what can you do with ever increasing precision? This is also a slide I borrowed from Ted Hansch. If you measure very, very accurately an atomic line, let's say the 1s-2s transition in hydrogen using two-photon spectroscopy, what you can do is you can measure it, and a few years later, you can measure it again. And now it becomes an interesting question if you have this precision. Will the result be the same as a function of time? If there were a small change, which there wasn't, you could only come to one conclusion, and this is that fundamental constants in nature change as a function of time.
So this precision of metrology is now being used to test whether the fundamental constants are really constant as a function of time. Precision is improving even more. The latest development in the spectroscopy of hydrogen concerns the size of the proton. In your homework assignment, you are actually calculating what is the correction to transition frequencies in hydrogen because you don't have the field of a point particle. The proton has a finite size. Or vice versa, if you have sufficient precision, if you have sufficient accuracy of the measurement, you can determine the size of the proton from the measured transition frequencies. This was done, and in 2010, there was a big surprise that the size of the proton determined from hydrogen spectroscopy did not agree with scattering measurements where you scatter electrons and protons to measure the proton size. This is still a puzzle. It's called the proton radius puzzle, and it is not clear what is causing it. What happened is in 2010, there was a big improvement in measuring the size of the proton, and this was done by replacing the electron with a muon, which is a heavy electron, but since the muon is so much heavier, the Bohr orbit of the muon, the negative particle going around the proton, is much smaller. Therefore, the Lamb shift, but also the correction due to the finite size of the proton, is much, much larger because there is much more overlap of the muonic wave function with the proton than for an electron. So there was a huge improvement in the precision of the measurement of the proton size, and this has really led to what's called the proton radius puzzle. It's not sure if that is at the same level as the Lamb shift, which gave rise to fundamental new physics. Maybe this is the discovery of the new Lamb shift in 2010, and it changes our understanding of fundamental physics, but maybe it's something else. The answer is not [INAUDIBLE]. At least this is 2010. A few years of checking the theory and checking the experiment has not removed the discrepancy. Rather to the contrary. It has hardened the case that there is some discrepancy which needs to be resolved. So this was just a little short excursion, a little bit of summary of spectroscopy of hydrogen over 80 years, from precursors to the Lamb shift to the proton radius puzzle. AUDIENCE: So I know the same group was using the same technique to measure deuterium, maybe helium? PROFESSOR: I think they want to do it but they haven't done it. That's what they plan to do. AUDIENCE: That was my question. PROFESSOR: So our next topic is now atoms in external magnetic fields. The first chapter is on fine structure and the Landé g factor. But maybe more colloquially, atoms in external fields means that we add one more vector to the mix. In fine structure, we have orbital angular momentum and spin angular momentum, and we discussed how spin-orbit coupling eventually couples L and S to J and so on, but now, we extend the game by one more vector, B, an external magnetic field. It really becomes a player in the game because you know that if you have spin-orbit coupling, we use the vector model that L and S couple and precess around the axis of J, the total angular momentum. So the game we play when we couple angular momenta is that the angular momenta couple, they precess around an axis that involves some quantum numbers and so on. But now, you can add a new quantization axis with a magnetic field, and then angular momenta will precess around the magnetic field.
And there may be actually a conflict, that for strong magnetic field, the precession is different than in weak magnetic fields, and this is what we want to discuss now. So it is the game of L, S, and B. So what we are adding with magnetic fields is we are adding one term to the Hamiltonian, which is the Zeeman term, where we have an external magnetic field and we couple with a magnetic moment. The one thing which makes it interesting when we discuss fine structure is the following, that we have actually two components to the angular momentum of the atom, which is the spin and the orbital angular momentum, and the two have different g factors. So it's not that the magnetic moment is just proportional to the angular momentum. If the angular momentum comes from spin, it has a different weight than when the angular momentum comes from orbital angular momentum, and this is what we want to now understand. We want to determine what is the magnetic moment of the atom when the angular momentum has two different sources? What we will find is we will find that there is a lambda g factor which is sometimes 2, which is sometimes 1, or which is sometimes somewhere in between, depending how S and L are arranged with respect to each other. So our Hamiltonian is the Hamiltonian for the atom. We know that we have fine structure, we discussed that, which couples L and S. And now have a magnetic moment due to the spin and due to the orbital angular momentum, and then couples to the external magnetic field. In other words, if we had no fine structure coupling, if S and L would not be coupled, the answer would be very simple. S just couples to the magnetic field, gives the same [INAUDIBLE], the g factor of 2, and L couples to the magnetic field with a g factor of 1. But now the two are coupled with respect to each other, and if you have L-S coupling, the projection of S and L on the z-axis is not a good quantum number anymore. So therefore, we have two different terms which are diagonal in two different bases, and that's what we want to discuss. So the g factor of orbital angular momentum is 1. The g factor of the spin is 2, or if you want to include the leading correction here from QED, it's the fine structure constant over 2 pi. We did discuss that the fine structure can be related as the Zeeman energy of the spin in a magnetic field which is created by the electron due to its motion. Or, if you take the frame of the electron, the electron sees the proton orbiting around that creates a magnetic field, and this magnetic field couples to the state. So therefore, we can associate fine structure with an internal magnetic field inside the atom. And this internal magnetic field is rather large. It's on the order of 1 Tesla. So therefore, for our discussion of the lambda g factor and fine structure in applied magnetic fields, we will assume that we are in the weak field limit where the fine structure term, the first term, is much larger than the Zeeman term. Of course, if you use very strong magnets, you can go to the high field case, but I will discuss explicitly the transition from weak field to high field for hyperfine structure, and the phenomenon for fine structure is completely analogous. It just happens at much higher fields. So anyway, I will discuss the high field case and transition with the high field case with a much more elegant example of hyperfine structure that you can immediately apply to fine structure if you like. 
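Since the blackboard expressions are not in the transcript, here is the Hamiltonian being referred to, written in a standard form; the symbol xi(r) for the spin-orbit coupling strength is my shorthand, not the lecture's notation:

```latex
H = H_0
  + \underbrace{\xi(r)\,\mathbf{L}\cdot\mathbf{S}}_{\text{fine structure}}
  + \underbrace{\frac{\mu_B}{\hbar}\,\bigl(g_L\,\mathbf{L} + g_S\,\mathbf{S}\bigr)\cdot\mathbf{B}}_{\text{Zeeman term}},
\qquad
g_L = 1,
\qquad
g_S \simeq 2\left(1 + \frac{\alpha}{2\pi}\right),
```

and the weak-field discussion that follows assumes the fine-structure term is much larger than the Zeeman term.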
If I want to solve the problem, calculate the Landé g factor, I could directly calculate just one matrix element and it would be done. So all I want to know is what is the Zeeman energy, because the Zeeman energy divided by the magnetic field is the magnetic moment. And since I assume that I'm in the weak field limit, I can simply use the quantum numbers S, L, and S and L coupled to J, and the magnetic quantum number mJ. So by simply calculating this expectation value, I'm done and I've solved the problem. However, I want to do the derivation using the vector model because it provides some additional insight. So in the vector model, we have L and S coupled to J, the total angular momentum. And in the vector model, we assume that L and S rapidly precess around J. And therefore, the only thing which matters are the projections. Only the projections of L and S on the J-axis are important. So you can say if you have a rapid precession of L and S around J, the transverse components rapidly average out and [INAUDIBLE]. So therefore, our Zeeman Hamiltonian has to be rewritten in the following way. The Zeeman Hamiltonian was the magnetic moment times the external magnetic field with a minus sign. But what matters is the projection on the J direction, so we do the projection in this way. And also, in the end, what matters is, since the magnetic moments are aligned with J, it is now the scalar product of the magnetic field with J. So in the vector model, we calculate the Zeeman energies in that way, but just to mention that if you don't like the vector model and the assumption of rapid precession, just take this matrix element. It's exactly the same. In other words, I give you the intuitive picture of what is inside those matrix elements. So let's evaluate that. Let me factor out the Bohr magneton. We have J squared, taking one of each bracket. And now assuming that the g factor of the spin is 2, the magnetic moment is the Bohr magneton times L-- the g factor of L is 1-- plus S, but the g factor of S is 2. So this is now the magnetic moment accounting for the two different g factors we projected on the J-axis. And the second bracket, B dot J, becomes the value of the magnetic field. We assume the magnetic field points in the z direction, so therefore, it is the z component of the total angular momentum. Let me collect the simple terms. Now L plus 2S, because L plus S is J, can be written as J plus S. So now we have the product of J with J, which gives us J squared, and then we need the product of S and J. And as usual, we can get an expression for that by using the summation of angular momenta. If we square it on the right hand side, we have the scalar product of J and S, and we can now express the scalar product of J and S by L squared, S squared, and J squared: one half of J squared plus S squared minus L squared, and this gets divided by the J squared we factored out. Now, we're just one line away from the final result. Jz is a good quantum number. It's mJ, the projection of the total angular momentum on the z-axis. The bracket here is now the famous result for the Landé g factor. So we have J squared over J squared, which gives us 1. And then I simply put in the quantum numbers for J squared, S squared, L squared, which is J times J plus 1 plus S times S plus 1 minus L times L plus 1, and we divide by 2 times J times J plus 1. So therefore, the Zeeman structure in a magnetic field is the Bohr magneton times the magnetic field times the magnetic quantum number mJ, and then we multiply with the g factor. These are now limiting cases.
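Before going through those limiting cases, here is a compact worked version of the vector-model projection just described, under the same assumptions (g factor 1 for L, g factor 2 for S); the intermediate steps are my reconstruction of the blackboard algebra rather than a literal transcript:

```latex
% Zeeman term projected onto J (L and S precess rapidly about J):
\langle H_Z\rangle
  = \mu_B B\,
    \frac{\bigl\langle(\mathbf{L}+2\mathbf{S})\cdot\mathbf{J}\bigr\rangle}
         {\bigl\langle\mathbf{J}^2\bigr\rangle}\; m_J
  \equiv g_J\,\mu_B B\, m_J,
\qquad
(\mathbf{L}+2\mathbf{S})\cdot\mathbf{J} = \mathbf{J}^2 + \mathbf{S}\cdot\mathbf{J},
\qquad
\mathbf{S}\cdot\mathbf{J} = \tfrac12\bigl(\mathbf{J}^2+\mathbf{S}^2-\mathbf{L}^2\bigr),

% which gives the Landé g factor:
g_J = 1 + \frac{J(J+1)+S(S+1)-L(L+1)}{2\,J(J+1)} .
```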
If we do not have spin, then the only ingredients, the only [INAUDIBLE] angular momentum is orbital angular momentum and we have a g factor of 1. So the lambda g factor simply becomes gL. In the case we don't have angular momentum, you can just evaluate this expression for L equals 0. You find indeed that the g factor is 2. But it can have different values. It depends on the atomic structure. That's all I want to say about fine structure in a magnetic field. The next step is now hyperfine structure. We are adding one more vector to the game. So we add angular momentum of the nucleus, so now the game being played is not only L and S. We have I and B. It's a game of the four vectors and eventually how they precess around each other, and that gives rise to the structure of hyperfine levels in an external magnetic field. We assume that L and S have coupled to J, so we have actually a coupling of J, I, but now we have an external quantization axis in Zeeman. It has to be due to the external magnetic field. And of course, in hyperfine structure, we discussed that I and J are no longer conserved angular momenta because they couple to a total angular momentum, which is F. So our Hamiltonian is the Hamiltonian without any kind of hyperfine and fine structure when we have the hyperfine coupling, which couples I and J with the product I dot J. And then we have an external magnetic field, which couples to the magnetic moment of the electron. This may be a smaller term, but you can easily carry with us. There's also a coupling with the magnetic moment of the nucleus. So in this case, because the hyperfine structure is smaller than the fine structure, I want to discuss both the weak field and the strong field case. Because magnetic fields of a few hundred [INAUDIBLE] may actually take into the high field momentum. I want to discuss both the low field and the high field limit. The low field limit implies that the Zeeman energies are much smaller than the hyperfine splittings. And then the way we describe the system is that J and I couple. So J, which is responsible for the magnetic moment, precesses around F, the total angular momentum, but then the total angular momentum precesses around the magnetic field. In other words, you assume that the coupling between J and I is so strong and they couple to F, the magnetic field is not breaking up the coupling between J and I. J and I together form F, and this hyperfine state, this magnetic moment, precesses around B. This is sort of the picture. You have to get used to it. J precesses around F and F precesses around the external magnetic field, B0. But again, if you don't like the precession model, just calculate the quantum mechanical energy that will stabilize the Hamiltonian. The answer is identical. The Zeeman Hamiltonian couples to the magnetic field, and we have two contributions to the magnetic moment, the electron of the nucleus. And in the weak field limit, we use a treatment which is almost completely analogous to the treatment we used when we derived the [INAUDIBLE] g factor. We can treat the same as Hamiltonian perturbation theory, and it's exactly analogous when we added a weak magnetic field to the fine structure. In the vector model, we have the coupling of J and I to F. The relevant term in the Hamiltonian is we have the magnetic moment of the electron, which is proportionate to J and it couples to B. 
So this relevant term, and this is fully analogous to what I did five or 10 minutes ago, has to be replaced in the presence of the nuclear angular momentum. We have to project everything on the axis of the total angular momentum, F. Therefore, the Zeeman Hamiltonian has the contribution to the magnetic moment due to the electron and due to the nucleus. This is proportional to J, but now we have to project it onto F. And similarly, the magnetic moment of the nucleus is proportional to I, but what matters is the projection on F. And since I factored out the Bohr magneton, the magnetic moment of the nucleus is proportional to the nuclear magneton. I have to account for the ratio. What matters now is the projection of F on B0. So therefore, collecting all the terms, we have the Bohr magneton, which is setting the scale of the interaction. The last term is the magnetic field, but the projection of F onto the magnetic field gives us the mF quantum number, and all the rest is called the g factor of the hyperfine structure. And the g factor of the hyperfine structure is-- let me just simplify and neglect the small contribution-- it's 1,000 times smaller-- of the nuclear magnetic moment, but if you want, you can easily include it. With this approximation, the g factor of the hyperfine structure is this. It's proportional to the Landé g factor we just derived. And then using exactly the same thing, you have J dot F. You can express it now by the quantum numbers of F squared, J squared, I squared. You find the final result, what are the g factors of the hyperfine levels, of the hyperfine states. So this is the hyperfine structure of atoms [INAUDIBLE] magnetic fields. Let's immediately go to the high field limit. The high field limit means that the electronic Zeeman energy is much larger than the hyperfine coupling. And that means always, when we treat the problem, we first take care of the big contributions to the Hamiltonian. We try to solve it, if possible, [INAUDIBLE]. And then the weaker term can be treated perturbatively. So now we are in the situation that the Zeeman coupling is the big term and the hyperfine coupling is the weaker term. So in other words, what comes first now is the Zeeman energy, so we are not coupling the electronic angular momentum and the nuclear angular momentum to a total angular momentum because this coupling is weak. We rather say that the electronic and the nuclear angular momentum align with the magnetic field. We quantize along the direction of the magnetic field, and then later we add the hyperfine coupling in a perturbative way. So B0 now quantizes J along the direction of the magnetic field, and therefore we use, as a good quantum number, mJ, the projection of J on the external magnetic field axis. So this takes care of J. J and mJ are good quantum numbers. What about I? Does the nuclear angular momentum and the nuclear magnetic moment strongly couple to the magnetic field? Well, the answer is yes, but the argument is a little bit more subtle. The direct coupling of the magnetic moment of the nucleus with the magnetic field may be smaller than the hyperfine interaction. So then you would say, the nuclear angular momentum should not couple to the magnetic field. It should first be coupled to the hyperfine interaction. The hyperfine interaction is I dot J. However, J, which couples strongly to the magnetic field because it couples with the Bohr magneton, has already been coupled to the magnetic field.
So therefore, the hyperfine interaction, which is I dot J, is now modified because J couples to the magnetic field, which means we have to project it onto the magnetic field axis. So therefore, the nucleus now experiences an electronic magnetic moment or electronic angular momentum which has already been coupled to the z-axis. And therefore, the hyperfine interaction is also coupling the nuclear angular momentum to the z-axis. So therefore, the result is that it is now this indirect coupling. You couple the electron angular momentum to the z-axis, and the electron angular momentum couples the nuclear angular momentum to the z-axis. So now this quantizes the nuclear angular momentum along the z-axis, which means that m sub I becomes a good quantum number. Anyway, maybe the result is even simpler than the explanation. Our Zeeman Hamiltonian now simply means that we have an external magnetic field and the electron couples to the magnetic field, so what matters is the projection, mJ. The same happens for the nuclear magnetic moment, and now we have to add the hyperfine interaction, which was originally I dot J, but since I and J are projected on the z-axis, what is really left over is only the product of mI and mJ. I could have gotten this expression immediately by just telling you, J and I no longer couple to F. This is destroyed by a strong magnetic field, and the good quantum numbers are J and I and their projections, mJ and mI. And then just writing down the expectation value of the Hamiltonian in this basis would have immediately given me this result. I wanted to give you the more mechanistic explanation of what's going on inside the atom and what leads to this result. I have discussed for you the two limiting cases, the weak field and the strong field case, but you can solve it also for intermediate fields. You simply have to do an exact diagonalization of the Hamiltonian, which involves the hyperfine coupling. And the hyperfine coupling, if you want, can be diagonalized with eigenfunctions where the quantum numbers are J, I coupled to F, and the projection of F, the magnetic quantum number mF. But now we have the Zeeman Hamiltonian, where everything is projected on the z-axis, so we have mJ and mI. And the Zeeman term can be diagonalized in a different basis, which is the basis of J, I, mJ, and mI. So I've shown you the weak field limit, where we simply assume those quantum numbers and calculate this term perturbatively, and I've shown you the high field limit, where we use those quantum numbers and calculate this term perturbatively. But in general, you just have to write down the matrix element of this Hamiltonian in whatever basis you choose. You can use the weak field basis. This term is diagonal. This is off diagonal. Or you can use the strong field basis where this is the diagonal and this is off diagonal, and simply diagonalize your Hamiltonian. Find the wave functions, find the eigenenergies. And since, for cases where J equals 1/2, it's only a two by two matrix which has to be diagonalized, you can do it analytically, and this leads to the famous Breit-Rabi formula. So the solution is analytic for J equals 1/2, and it's a beautiful example which you should solve in your homework assignment. Let me just sketch the solution. When you go from the weak field to the strong field limit, the z component of the total angular momentum in one case is mF. In the other case, it is mI plus mJ. So when you go from one limit to the next, you connect only states where mI plus mJ equals mF.
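Here is a minimal numerical sketch of that exact diagonalization, for the alkali-like case taken up next (J = 1/2, I = 3/2, as in sodium or rubidium-87). The hyperfine constant a sets the energy scale, the electronic g factor is taken as 2, the nuclear Zeeman term is neglected, and all names and units are my own choices rather than anything from the lecture:

```python
import numpy as np

def spin_matrices(s):
    """Return (Sx, Sy, Sz) for spin s, in units of hbar."""
    m = np.arange(s, -s - 1, -1)                 # m = s, s-1, ..., -s
    Sz = np.diag(m)
    Sp = np.zeros((len(m), len(m)))              # raising operator S+
    for i in range(1, len(m)):
        Sp[i - 1, i] = np.sqrt(s * (s + 1) - m[i] * (m[i] + 1))
    return (Sp + Sp.T) / 2, (Sp - Sp.T) / 2j, Sz

J, I = 0.5, 1.5          # electronic and nuclear angular momentum
a, gJ = 1.0, 2.0         # hyperfine constant (energy scale) and electronic g factor
Jops = spin_matrices(J)
Iops = spin_matrices(I)
idI = np.eye(int(2 * I + 1))

def hamiltonian(B):
    """H = a I.J + gJ * muB * B * Jz, with muB = 1; nuclear Zeeman neglected."""
    IdotJ = sum(np.kron(Jk, Ik) for Jk, Ik in zip(Jops, Iops))
    return a * IdotJ + gJ * B * np.kron(Jops[2], idI)

print(np.linalg.eigvalsh(hamiltonian(0.0)))
# zero field: -5a/4 (three states, F = 1) and +3a/4 (five states, F = 2)
print(np.linalg.eigvalsh(hamiltonian(10.0)))
# strong field: two groups near +-gJ*B/2, each split by roughly a*mI*mJ
```

In the weak-field limit the slopes of the eight levels reproduce the hyperfine g factors (plus and minus one half for F = 2 and F = 1 with these numbers), and in the strong field they go over into the mJ, mI labeling, which is exactly the interpolation sketched next.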
So the structure of the general solution can be explained by repulsion and anti-crossings of states with the same [INAUDIBLE] number. So let me show you the weak field, the strong field limit, and do a graphic interpolation. What I've chosen as an example is the case of a 2 duplet S 1/2 ground state and a nuclear angular momentum of 3/2. Examples for that is sodium and rubidium 87. This is magnetic field. At weak magnetic field or zero magnetic field, the spin 1/2 couples to the nucleus in 3/2, and that gives to hyperfine states F equals 1 and F equals 2. The splitting is given by the hyperfine constant, a, and the hyperfine interaction is A times H times I times S. The structure is such that the center of mass of the energy levels is preserved, so therefore, one state is moved up by 3/4 a. The other state is moved down by 5/4. And since F equals 1 has three components, F equals 2 has five components, the center of mass is preserved. We have calculated the g factor for those states, and the g factor tells us what is the structure in weak magnetic fields. So this is the weak magnetic field solution. At high magnetic field, you know what you have in high magnetic field. You have a single electron which can spin up and spin down. So if you have an electron which is spin up and spin down, it pretty much is linear Zeeman shift for spin down and linear Zeeman shift for spin up. This is sort of what we expect. So what will happen is that the energy levels will evolve like this. So in other words, we have eight levels. We have the structure at weak magnetic fields. At high magnetic fields, of course, we also have eight levels, but they pretty much group into spin up of the electron, spin down of the electron. And then there is a smaller hyperfine structure on top of it because now the nuclear spin can have various orientations. And I equals 3/2 state has four orientations, so therefore, electron spin up and electron spin down will obtain four sub-levels. And if you ask how did I connected, I've connected the quantum numbers as such what is here, the states are labeled by mI and mS, and here, they're labeled by mF, but mF equals mI plus mS. This is how you correlate the states in the high field case to the states in the low field case. I'm running out of time, but here, we have mJ equals 1/2. Here we have the electron spin minus 1/2. These four levels are now four different quantum numbers for the nuclear angular momentum, which are minus 3/2, minus 1/2, plus 1/2, and plus 3/2. And this is what I meant by avoided crossing. At some point, I think, draw it yourself, put the quantum numbers on it, and you'll learn a lot by doing it. What you realize also when you solve the Hamiltonian that this structure can be explained the following way. You will always find you have some states which are stretched, where there is only one state which has the maximum angular momentum inside all the stretched states. And then the other states, you always need to find two states which have the same total mF, and those two states avoid each other. You can say, just pointing on two states, those two states, let's just assume they have the same mF. They undergo an avoided crossing, and that's exactly what you get out of the diagonalization of the two by two matrix. So this whole diagram can be understood by you have stretched states which form a one by one matrix. There is no recoupling taking place. 
And then you have three pairs of states which form two by two matrices, and in each pair, if you would now focus on it, you really see the avoided crossing typical for a two by two matrix. Any questions about that? The next thing would be to go through some bigger questions and review atomic structure, including external magnetic fields. But we'll do that at the beginning of the next class.
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
25_Coherence_V.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So over the last lecture, we have talked about coherence within an atom, coherence between two levels, coherence between three levels. And today, in the last class, we want to talk about coherence between atoms. So this is now, I think, for the first time in this course that we really have more than one atom. Well, maybe we discussed some collision or broadening, or we discussed [INAUDIBLE] interaction between two atoms. But usually, atomic physics is one atom at a time. But now we want to understand one important phenomenon which happens when we have many atoms. And the phenomenon is called superradiance. So I left something good for the end. And superradiance has in common the word super with superconductivity and superfluidity. And it really represents that the atom, many atoms act together. And the word super also means coherence among atoms-- superfluidity and superconductivity have macroscopic wave function where all the atoms, the meta waves are coherent. The phenomenon of superradiance, as we will see, has not so much to do with coherent atoms. It has more to do with coherent photons. So it's more-- some people regard superradiance as a laser without mirrors. But you'll see where the story leads us to. So just to set the stage-- for many atoms, we should first talk about single atoms. And all that is described in this landmark paper by [INAUDIBLE] in 1954, which is posted on our website. So if you have a single atom prepared in the excited state, it decays to the ground state, and we want to characterize the system by emission rate as a function of time. So the emission rate initially is gamma, the natural language of the excited state. And then, of course, the emission rate decays because we don't have any atoms left. Similarly, the probability to be in the ground state is zero initially, and then with an exponential approach, it eventually goes to unity after a while, after we have only atoms in the gamma state. So this is rather straightforward. But now we want to bring in a second atom. And I'm asking, what happens when we have not the one atom, but two atoms? One is in the ground state. One is excited. So pretty much what we have added to the original situation with one excited atom was we've brought in one ground state atom, which naively you would think does nothing. But that's not the case. What happens is-- and I assume just for review-- we will drop the assumption later, but we assume for now that all the atoms are within one optical wavelength. What we then realize is for two atoms-- and I will show you that in its full beauty-- that the initial rate of light which comes out of the system is the same. So the extra ground state atom does not change the initial emission rate, but it goes down faster. And if we ask what is the probability that the atom is in the ground state, we find that it's only one half. So in other words, normalized [INAUDIBLE] system, we have a ground and excited state atom, and what comes out is only half a photon. Half of the atoms do not decay. So it's not the same rate and the same decay. Something profoundly has happened. And this is what you want to understand. So let me give you the correct answer. 
This is the rate of emission as a function of time for this situation. We start out with gamma, but then the emission decays, not with gamma but with 2 gamma. And the probability that both atoms are in the ground state-- or that the second atom, so to speak, is in the ground state-- will only asymptotically go to 1/2. And it does so exponentially-- but again, with a time constant which is two times faster than for the single-atom system. So we have the same initial emission rate, but only a probability of 1/2 to emit at all. So in order to understand it, we have to look at an atom in the excited state and an atom in the ground state. And we want to write down the wave function as a superposition of a symmetrized and an antisymmetrized wave function. I should tell you, I'm going very slowly for two atoms. And then once I've introduced the concept for two atoms, with a few pen strokes, we can immediately discuss n atoms. So all the physics, all the understanding of what goes on in superradiance is already displayed for two atoms. So we want to have a superposition of symmetric and antisymmetric wave functions. The symmetric one is a normalized wave function which is ge plus eg. And we call that the superradiant wave function, for reasons which will become clear in a moment. And if you have a minus sign here, the antisymmetric combination, we call this the subradiant wave function. Now, what happens is, we have to consider-- so we have symmetrized the wave function. Well, I didn't really tell you why, but it's always good to symmetrize. Symmetry is, if you can use it, something good. And the reason why I symmetrized it is because I want to look at the interaction Hamiltonian. And if I look at the interaction Hamiltonian-- the one we have seen many, many times but now for two atoms-- we will immediately realize that this interaction Hamiltonian is symmetric. So therefore, if the Hamiltonian is symmetric, it's a really good starting point to have the wave function for the atoms expanded in a symmetric basis. And since I want to emphasize that the whole story I'm telling you today has nothing to do with any kind of second quantization-- it is about spontaneous emission, but it's not involving any subtlety of spontaneous emission and field quantization-- I want to write down the interaction Hamiltonian both in a classical and a quantum mechanical way. In the classical way, we have the dipole moment d1. We have the dipole moment d2. And the atoms talk to the electric field at position R and time t. And now you realize where some of the assumptions are important: since the atoms are localized to within a wavelength, they really talk to the same electric field. There are no phase factors. In about 55 minutes or so, we introduce phase factors for extended samples. But for now, we don't. And therefore, what the atoms couple with is a dipole moment which is the sum of the two dipole moments. So this is classical or semi-classical. So what enters in the Hamiltonian is only the sum of the operators for the two atoms. And the same happens in the QED Hamiltonian. And actually, I will get a little bit more mileage out of the QED Hamiltonian, as you will see in a moment. Because with the QED Hamiltonian we describe the atomic system-- so first atom one-- with the raising and lowering operators, with the atoms interacting with a and a dagger. And then I have to add the term where the index one and two are exchanged.
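Written out in formula form (my notation, with the single-atom coupling constant g pulled out front, and the sigma operators being the raising and lowering operators introduced in the next paragraph), the point is that only the sum of the two atoms' operators appears in the coupling:

```latex
H_{\rm int} = -(\mathbf{d}_1 + \mathbf{d}_2)\cdot\mathbf{E}(\mathbf{R},t)
\quad\text{(semi-classical)},
\qquad
H_{\rm int} = \hbar g\,\bigl[(\sigma_1^{+} + \sigma_2^{+})\,a + (\sigma_1^{-} + \sigma_2^{-})\,a^{\dagger}\bigr]
\quad\text{(QED form)} .
```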
So we are introducing here-- that's convenient for a two-level atom-- the spin notation. Sigma plus and sigma minus are the raising and lowering operators which flip the atom from the ground to the excited state and vice versa. But the important part now is-- and this is where, actually, everything comes from in superradiance-- that the coupling involves not the individual spins, little sigma plus, sigma 1 and sigma 2-- it only involves the sum over the individuals, i equals 1 to 2, and later we extend the sum to n. So therefore, what matters for the interaction of the atoms with the electromagnetic field is the sum of all the atomic spin operators. And the sum is, of course, symmetric against exchange. So therefore, when we are asking what is the coupling of the state which I called the superradiant state, the one where we had symmetrized eg plus ge, or we ask, what is the coupling of the subradiant state to-- well, the state where both atoms are in the ground state. Well, now we can use symmetry. The left-hand side is symmetric. The operator is symmetric. And now only the symmetric state will couple. The antisymmetric state will not couple. So therefore, the subradiant state, eg minus ge, cannot decay. That's why we call it subradiant. I think a better word would be non-radiant, but non-radiant is definitely subradiant. And for the superradiant state, eg plus ge, we find that we have actually an enhancement of the coupling by a factor of square root 2. So now we pretty much know what we have to do. We want to use the symmetry of-- let's assume we consider ground and excited state of each atom as spin 1/2. But now we want to look at the total spin, the total pseudo angular momentum of the two atoms, and later we extend it to n atoms. So we want to use now the power of the angular momentum description. And that goes as follows. We have four states of two atoms. And this is gg, ge, eg, and ee. And if I denote the ground state with spin down and the excited state with spin up, I'm talking about 2 spin 1/2 states. And 2 spin 1/2 states can couple to total s equals 1 and total spin s equals 0. And that's what I've done here. I've arranged the states ee, the symmetric superradiant state, the ground state, and the subradiant state. In an energy level diagram-- here we have 0 excitation energy, here we have 1 excitation energy, and here we have two excitation energies of the atom. But I've also added now the spin labels for the combined system. Those symmetrized states correspond to a spin s equals 1. It's a triplet ladder with three different magnetic quantum numbers. m equals plus 1 means everything is highly excited. m equals minus 1 means we are in the total ground state. And here we have the singlet state, which is the antisymmetric state or the subradiant state. And our interaction Hamiltonian involves the total spin plus minus. It is the raising and the lowering operator. And you know that the raising and lowering operator for the spin is only making transitions within a manifold of total s. It just changes the m quantum number by plus minus 1. So the Hamiltonian cannot do anything to the singlet state, because there is no other singlet state to couple to. But within the triplet manifold, the sigma plus sigma minus operator is creating transitions between the different m states. And the coupling constant, which for an individual atom was little g, is now a factor of square root 2 enhanced. And we will see in a few minutes that for n atoms, it's square root n enhanced.
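As a concrete check of this symmetry argument (my notation, with everything normalized to the single-atom matrix element):

```latex
|\psi_{\pm}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|eg\rangle \pm |ge\rangle\bigr),
\qquad
S^{-} \equiv \sigma_1^{-} + \sigma_2^{-},
\qquad
\langle gg|\,S^{-}\,|\psi_{+}\rangle = \sqrt{2},
\qquad
\langle gg|\,S^{-}\,|\psi_{-}\rangle = 0 ,
```

so the superradiant state decays with an enhanced matrix element while the subradiant state does not decay at all.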
And, so to speak, that's where the word super in superradiance comes from. Yeah, actually let me just quickly add the diagram for the single atom. The single atom has only an excited state and a ground state. It corresponds to s equals 1/2. And we have magnetic quantum numbers of plus 1/2 and minus 1/2. And the coupling due to the light-atom interaction goes with the coupling constant g. So the key message we have learned here is that when we have several atoms within an optical wavelength, we should use for their description symmetrized and antisymmetrized states. Or when we generalize to more than two atoms, we should just add the total angular momenta by treating each atom as a pseudo spin 1/2. And it is this angular momentum classification which tells us how the radiation proceeds. Because the coupling to the electromagnetic field is only involving the lowering and raising operators for the total spin. And this only acts on a manifold where the total spin s is conserved. And what we get is transitions with delta m equals plus minus 1. So the question, have those effects been observed? Yes, they have, actually. And they're important for a lot of research. But just for two atoms, the simplest observation is when you take two atoms-- let's say two sodium atoms-- bring them very close, and you form a sodium 2 molecule. And to some extent, for states where the molecular binding is not completely changing the electronic structure, we can regard the sodium 2 molecule as consisting of two sodium atoms. And indeed, if you do spectroscopy of the sodium 2 molecule, you find some molecular states which are very long lived, like the subradiant states, which do not radiate at all, but then you find states which have a spontaneous emission rate which is about two times faster than the atomic spontaneous emission rate. So you find that you can understand some radiative properties of molecules by assuming that they are related to the sub- and superradiant states of the two atoms which form this molecule. So an example here is the sodium 2 molecule: a state where gamma of the molecule is approximately 2 times gamma of sodium, or other states where it's very small. OK. Now we understand the basic form of superradiance in two atoms. And therefore, we can now generalize it to n particles. But before I use the spin algebra to describe n particles, I want to glean some intuition where we just consider-- and this takes us back to the beginning of the course-- where we consider n spins in a magnetic field. And I really invite you to think now completely classically. We'll describe it quantum mechanically in a moment. But I've often said in this course, if in doubt, if you have a classical description and a quantum mechanical one and they seem to contradict, usually there is more truth in the classical description. It's so much easier to fool yourself with the formalism of quantum mechanics. So let's take n spins in a magnetic field and ask what happens. So we have n spins. So these are now real spins. They have a real magnetic moment. These are tiny little bar magnets. And we do a pi over 2 pulse. And after we've done a pi over 2 pulse, the spins are aligned like this. Let's assume we had our magnetic field. And now what happens is these spins will precess at the Larmor frequency. So now you have your n spins. They precess together. And if you have a magnetic moment which oscillates, the classical equations of electromagnetism tell you that you have now a system which radiates.
But compared to a single atom, the dipole moment is now n times the single atom dipole moment. So therefore, what do we expect for the radiated power? Well, if the electromagnetic radiation by an oscillating electric or an oscillating magnetic dipole moment scales with a dipole moment squared, therefore, we would expect that the power radiated is proportional to n squared. And that means I have to take the perfect of n [INAUDIBLE]. This means this is n times higher than if you assume you have n individual particles, and each of them emits electromagnetic radiation. what I'm telling you is if you scatter n spins through your laboratory, you excite them. Pi over 2 pulse, they radiate. They radiate a power which is proportion to n. But if you put them all together, localize them better than the wavelengths, their radiated power is proportional to n square, which is an n times enhancement. So the way how I put it for n spins-- and this is a situation of nuclear magnetic resonance-- this is the completely natural picture. But if I would have asked you the question-- let's take n atoms which are excited and put them close together, you say, well, each atom does spontaneous emission, and if you have n atoms, we get n times c intensity you would have gotten a different result. So we are so accustomed to look at spins in NMR as a coherent system, look that all the spins add up to one giant antenna, to one giant oscillating dipole moment, whereas for atoms, we are so much used to saying each atom is its own particle and thus its own thing. So for n excited atoms, they are usually regarded as independent. However-- and this is the message of today-- there shouldn't be a difference. All 2 level systems are equivalent. Side remark-- for NMR in spins, it is much, much easier to observe the effect, because the condition that all the spins are localized within one wavelength is always fulfilled if the wavelength is meter or kilometers. But if you have atoms which radiate at the optical wavelengths, this condition becomes nontrivial. That is partially responsible for the misconception that you treat the two-level system which is a spin in your head differently from the two-level system which is an atom. So the important difference here is lambda. And we have to compare it with a sample size. And usually, the sample size is much larger in the optical domain, and is much, much smaller in the NMR domain. However-- and that's what we'll see during the remainder of this class-- some of the dramatic consequences of superradiance will even survive under suitable conditions in the extended samples. So when we have samples of excited atoms much, much larger than the optical wavelengths, we can still observe superradiance. So therefore, for pedagogical reasons, I first complete the focus on the case that everything is tightly localized. We derive some interesting equations, and then we see how they are modified when we go to extended samples. But I want to say, the intuition from spin systems, the intuition from classical precession and nuclear magnetic resonance, will help us what happens for electronically excited atoms. So we want to use this other spin 1/2 system as a powerful analogy to guide us. So before I start with the angular momentum formalism, I want to emphasize that what are the ingredients here. Well, we're talking about coherence-- coherent radiation, coherence between atoms-- and we'll talk about radiation. And the important part here is the following. 
That when we talk about radiation, we have the situation that all atoms interact with a common radiation field. In other words, all the spins, all the atoms have to emit their photons into the same mode of the electromagnetic field. And therefore, you may be right in some limit that the atoms are independent, but not the photons they emit. They go into the same mode. And therefore, the emitted photons cannot be treated independently. And that's why the classical picture is so powerful for that. Because in the classic picture, we do a coherent summation of the field amplitudes. So we have constructive interference. The superposition principle of field amplitudes build into our equations and deeply engraved in our brains. And that's why when we use classical arguments, we automatically account for that the photons interfere, that the photons are emitted into the same mode of the electromagnetic field. And eventually, this leads to the phenomenon that we have coherence and enhancement when we look at spontaneous emission for n atoms which are sufficiently localized. So let me also discuss what we have assumed here. Number one is, we have assumed we have a localization of the sample smaller than the optical wavelengths. The other thing-- and this is really important-- we are talking here about a collective phenomenon where n atoms act together and do something. They develop the phenomenon of superradiance. They decay much, much faster than any individual atom could do by itself. But nevertheless, we have not assumed-- or we have actually excluded in our description-- that there is any direct interaction between the atoms. The atoms have no [INAUDIBLE] interaction. They're not forming molecules. They're not part of a solid with shared electrons. The atoms are, in that sense, non-interacting. And therefore, in a way, as long as they are just atoms, independent. Finally-- and I want you to think about it-- you can think about already for two atoms before we generalize it to n atoms. Think about it. What was really the assumption about the atoms? Do the atoms have to be bosons to be in this symmetric state? Can they be fermions? Or can they be even distinguishable particles? If the two atoms where one would be a sodium atom and one would be a rubidium atom-- but let's just say we live in a world where sodium and rubidium atoms emit exactly the same color of light. Would we have been sub and superradiant state for two atoms, one of which is sodium and one of which is rubidium? Yes? AUDIENCE: [INAUDIBLE]. PROFESSOR: Exactly. AUDIENCE: [INAUDIBLE]. PROFESSOR: Yes. And that confuses many people. It is the indistinguishability of the photons they have emitted. It is the common mode where the photons are emitted. The atoms can be distinguished. Also, we've made the assumption that the atoms are localized to within an area which is much smaller than lambda. But you could imagine you have a solid state matrix and you have one atom here, one atom there, and you can go with a microscope and distinguish them. So therefore, the moment you can distinguish them because they are pinned down in a lattice-- or if you don't like a lattice, take two microscopic ion traps a few nanometers apart, and you tightly hold onto two ions-- it doesn't matter whether they are bosons or fermions. It only matters whether your bosons or fermions when the atomic wave functions overlap and you have to symmatrize it. As long as you have two atoms which are spatially separated, it doesn't matter whether they are bosons or fermions. 
And that also means they can be completely different atoms. You can just call them boson A and boson B. Now you can call it sodium and rubidium, and they can have different nuclei. There can be different numbers of neutrons in the nucleus. It could be different isotopes of the same atom. The whole collective phenomenon comes when they emit a photon into the same mode. OK. So now we want to turn to a formal treatment for n particles. So we have now the individual pseudo spins one half. We form now the sum over all n particles. We get the total spin s. The total spin s quantum number has to be smaller or equal to n over 2, because we have n spin 1/2 systems. The m quantum number is 1/2 times the difference of the atoms which are in the excited state minus the atoms which are in the ground state. And this, of course, trivially must be smaller than or equal to s, because m is the z component of s. And we are now describing the system by the eigenstates s and m of the collective spin. So that means we have the following situation. We have a manifold-- we want to show now all the energy levels. We have a manifold which has a maximum spin n over 2. The next manifold has n over 2 minus 1. And the last one has-- let's assume we have an odd number of particles-- s equals 1/2. So here, we have now n energy levels. We can go from all the n atoms excited to all the n atoms being de-excited. In the following manifold, we have s is one less. And therefore, we have a ladder of states which is a little bit shorter. And eventually, for s equals 1/2, we have only two components, plus 1/2 and minus 1/2. So those levels interact with the electromagnetic field. The coupling operator to the electromagnetic field, we have already derived that, involves the sum of all of the little sigma pluses, sigma i pluses, and we call the sum of all of them s plus and s minus. And the matrix element for spontaneous emission is now the following. You have a state with s, m. s minus is the lowering operator for the n particle system, so it goes from a state with a certain number of atomic excitations to one excitation less, and that means this is the act of emitting spontaneously one photon. The operator s minus stays within the s manifold, so we stay in the same ladder, which is characterized by the quantum number s. But we lower the m quantum number by one. The m quantum number is a measure of the number of excitations. And we know from general spin algebra that this matrix element is the square root of s minus m plus 1 times s plus m. There are, of course, pre-factors like the dipole matrix element of a single atom. But I always want to normalize things to a single atom. And by just using the square root, if you have a single particle which is in the s equals m equals 1/2 state, then you see that this square root is just 1. So therefore, when I discuss now the relative strengths of the transitions between those eigenstates, I've always normalized to a single particle. For a single particle, the transition matrix element is 1. OK. So therefore, what we want to discuss now is, we want to discuss the rate, which is the matrix element squared, or the intensity of the observed radiation relative to a single particle. So the intensity-- and this is what we are talking about-- is now the square of the square root, which is s minus m plus 1 times s plus m. Pretty much, this is the complete description of superradiance for strongly localized atoms. It's all in this one formula.
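As a quick sanity check of that formula, here is a small sketch (my own, with everything normalized to the single-atom rate) evaluating the intensity along the fully symmetric ladder s = N/2:

```python
def dicke_intensity(s, m):
    """Normalized rate |<s, m-1| S- |s, m>|^2 in units of the single-atom rate."""
    return (s - m + 1) * (s + m)

N = 20                       # number of atoms in the fully symmetric ladder, s = N/2
s = N / 2
for m in (s, 0, -s + 1):     # first photon, halfway down, last photon
    print(f"m = {m:5.1f}  ->  intensity = {dicke_intensity(s, m):.0f}")
# first photon:  N            (same as N independent atoms)
# m = 0:         (N/2)(N/2+1), of order N^2/4 -- the superradiant burst
# last photon:   N            (a single excitation still radiates N times faster)
```

These three values are exactly the cases discussed in the next paragraphs.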
Once we learned how to classify the states, we can just borrow all the results from angular momentum addition and angular momentum operators. So I want to use this formula for the intensity and look at which is the most superradiant state, the state where all the particles are symmetric. And this is the state with the maximum spin, s equals n over 2. So I'm looking now at the ladder of m states, and I want to figure out what happens. So the maximum m state is m equals s, all the atoms are excited. And now the first photon gets emitted. So just put s equals m equals n over 2 into the formula for the intensity, and you find that this expression for the intensity is just n. So we have n excited atoms, and they initially emit with an intensity which is n. And this is the same as for n completely independent atoms. So nothing really special to write home about. But now we should go further down the ladder, and let's look at the state which has m equals zero. Well, then the matrix element squared for the transition which goes to m equals zero has an intensity which is n over 2 times n over 2 plus 1. I will just look: s is n over 2, and m is zero. So we have the question whether we have an odd or even number of particles, but it doesn't really matter. What dominates is always the big factor n over 2. So what we find out is that we have an enhancement, a huge enhancement over independent atoms, because this intensity goes with n squared, and this proportionality to n squared, this is a hallmark of superradiance. So this is what is characteristic for superradiance. We have an n times enhancement relative to a single atom. So this is one important aspect. Now, in the classical picture, that should come very naturally. If you have all the spins aligned and they start the [INAUDIBLE] precession, there is not a lot of oscillating dipole moment. But when half of the spins are de-excited, they are now in the XY plane. Now you have this giant antenna which oscillates and radiates. So it's clear that at the beginning, the effect is less pronounced, and if you're halfway down the Bloch sphere, then you would expect this n times enhancement. But now let's go further down the ladder and ask what happens when we arrive at the end. So I'm asking now, what is the intensity when the last photon gets emitted. There is only one excitation in the system. And the answer is, it's not one like an independent atom. If you inspect the square root expression, you find it's n. So we have one excitation in the system, but it's completely symmetrized. And therefore, we have an n times enhancement. And I want to show you where it comes about. So there's only one particle excited. And here, we have an n times enhancement. By the way, the states with the classification s and m are called the Dicke states. And this state here, which has a single excitation but radiates n times faster than a single atom, is a very special Dicke state. And there is currently an effort in Professor [INAUDIBLE] lab to realize in a very well-controlled way this special Dicke state in the laboratory. These are non-classical states, because they're not behaving as you would maybe naturally assume a system with a single excitation to behave. So let's maybe try to shed some light on it. One way how you can intuitively understand superradiance is really with a classical antenna picture, that you have n spins which form a macroscopic dipole moment which oscillates.
And this is a very nice picture to understand the N times enhancement when we have half of the atoms excited and the other half de-excited. Let me now give you a nice argument which explains why a single excitation in this system leads to an N times enhanced decay. The situation is that the initial state for this last photon is: we have one excited atom, and all the other atoms are in the ground state. However, we could also have, in this nomenclature, the second atom excited. Or we could have the third one excited, and so on. So therefore, because we are in the left-most manifold, which has the maximum spin quantum number of N over 2, everything is fully symmetrized. So we have to fully symmetrize by summing over the N possibilities. And our final state is, of course, the fully symmetrized ground state. And now you realize that you have a coherent summation: you have N contributions. So the matrix element has N contributions compared to a single atom, while the normalization only costs a factor of square root N. Therefore, the matrix element is square root N times larger than for an individual atom. So you simply have one atom excited and N minus 1 atoms not excited; but if you have the fully symmetrized state, you don't know, for fundamental reasons, which atom is excited. You have a superposition state where the excitation can be with any of the atoms. This state, which has a single quantum of excitation, radiates N times faster than a single atom would. Let me make a side remark. Maybe some of you remember when we did cavity QED: we had just proudly quantized the electromagnetic field, and we discussed the vacuum Rabi splitting. And I told you that if the cavity is not empty but is filled with n photons, then, because of the matrix element of the a-dagger operator, you get an enhancement of the vacuum Rabi splitting which is square root n in the photon number. But then I showed you the important first observation, the pioneering research at Caltech by Jeff Kimble and Gerhard Rempe, and they didn't vary the photon number. They varied the atom number. And when they had more flux in the atomic beam, the Rabi splitting became larger and larger. Well, we have just learned that when we have N atoms, the matrix element for emitting the photon is square root N times enhanced. So if you put big N atoms in a cavity filled with little n photons, the Rabi splitting between the two modes has a square root of n plus 1 in the photon number and a square root of big N in the atom number. So the effect I showed you in the demonstration of the vacuum Rabi splitting, this scaling with the atom number, can actually be understood as a superradiant effect. OK. So that's pretty much what I wanted to tell you about the basic phenomenon of superradiance. Now I want to discuss two more things. One is superradiance in an extended sample, if we have time for that. But I also want to discuss with you the following question. Let's assume we have the same system, and we just convinced ourselves, yes, it's superradiant: photons are emitted N times faster. Now, what would you think will happen when we are not looking for spontaneous emission, but we shine laser light on it, and we ask for induced emission? Or the other way around -- and you know from Einstein's treatment that it's completely reciprocal -- we ask the question, what happens to the absorption process?
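The square root N counting can be checked directly. Below is a small numpy sketch (my own illustration, assuming the basis convention |e> = (1, 0), |g> = (0, 1)) that builds the symmetric single-excitation state and evaluates the collective matrix element to the all-ground state:

```python
import numpy as np
from functools import reduce

def kron_list(ops):
    """Tensor product of a list of single-atom operators or states."""
    return reduce(np.kron, ops)

def collective_lowering(N):
    """S_minus = sum_i sigma_minus_i in the 2^N dimensional product basis."""
    sigma_minus = np.array([[0.0, 0.0], [1.0, 0.0]])   # |g><e| with e=(1,0), g=(0,1)
    identity = np.eye(2)
    S = np.zeros((2**N, 2**N))
    for i in range(N):
        ops = [identity] * N
        ops[i] = sigma_minus
        S += kron_list(ops)
    return S

def symmetric_single_excitation(N):
    """Normalized symmetric state with exactly one atom excited."""
    e = np.array([1.0, 0.0])
    g = np.array([0.0, 1.0])
    psi = np.zeros(2**N)
    for i in range(N):
        kets = [g] * N
        kets[i] = e
        psi += kron_list(kets)
    return psi / np.sqrt(N)

for N in (2, 3, 4, 6):
    all_ground = kron_list([np.array([0.0, 1.0])] * N)
    amplitude = all_ground @ collective_lowering(N) @ symmetric_single_excitation(N)
    print(N, amplitude, np.sqrt(N))   # the amplitude equals sqrt(N)
```

The N equal contributions, divided by the 1 over square root N normalization, give exactly the square root N enhancement of the matrix element, and hence the N times faster decay.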
So a stimulated emission process or an absorption process -- are they also enhanced N times? I don't know. Do you have any opinions about that? AUDIENCE: [INAUDIBLE]. PROFESSOR: Yes. It's a subtle way of counting. I've shown you that certain matrix elements -- especially the matrix element when the spin is in the middle, at 90 degrees -- are N times enhanced. And of course, if you ask for absorption or stimulated emission, we are talking to a system which has an N times enhanced matrix element, and you would say, things go N times faster. Why don't you hold this thought for a moment? Now let me wear another hat and say that we have assumed that we have N independent spins that are closely next to each other, but they're not interacting. And now I take these N spins, and for stimulated emission and absorption, we can just use a picture of Rabi oscillation. On the first Rabi cycle, we emit. On the next Rabi cycle, we absorb. So now -- and why don't you think not about these pseudo-spins, atoms with [INAUDIBLE] excitation -- just think of real spins which have a magnetic moment, and you drive them with a magnetic field. So now you have your N little spins. You apply a time-dependent magnetic field to them, and the time-dependent magnetic field is now driving the spins in Rabi oscillations. And the external drive field talks to one spin, talks to the next, talks to all of them. But each of the spins does exactly the same Rabi oscillation it would do if all the other atoms were not present. So the picture is: you have an external field, all the atoms couple to the external field, but the coupling of each atom to this external field is exactly the same as a single atom coupling to the external field. And the Rabi frequency for each atom is exactly the Rabi frequency you would get for a single atom. So based on this picture, I would expect that if I have my N spins -- and these can now be real spins with a magnetic moment, or atoms in an electronically excited state -- and I coherently drive them with a drive field, they will do Rabi oscillations, but the frequency of the Rabi oscillations will be the same as for a single atom. OK. We have just held the thought that the matrix elements in the Dicke states, which are square root N times larger, seem to suggest that there should be an enhancement, whereas the analysis in terms of independent atoms driven by an external field also seems compelling. So now we have to reconcile the two approaches. Is the question clear? We have matrix elements in the Dicke states which suggest enhancement, but the simple picture of N independent atoms driven by an external field says there is no enhancement. OK. So let me just write down more formally what I explained. We have an initial state which is all the atoms in the ground state -- the single-atom ground state to the power N -- and the single-atom state would evolve, when it is driven, into a state phi of t. So the exact wave function for our N particles is nothing else than the time-dependent solution of Schrodinger's equation for the single particle, taken to the power N. So this is pretty much a mathematical proof -- unless I've made a mistake, which I haven't. It takes exactly half a single-atom Rabi period to completely invert the population, exactly as for a single atom. So that's the result.
However, if you describe the system by Dicke states, you have matrix elements which are proportional to N. I've described it just as a two-level system -- each atom does its Rabi oscillation -- and said, OK, the system of N atoms is just N individual systems. But if you insist on describing it as a collective spin, then we have the Dicke states, and then we have the N times enhancement of the matrix element. But then we also have to go through N states: we have N steps in the Dicke ladder. And one can say now -- and this is the exact argument -- you have N steps, you take each step N times faster, but the total time is the same. N times 1 over N is 1. OK. But now when we talk about spontaneous emission, we are not driving the system with an external field. It's really driven by the system itself, which emits photons into the empty mode. For spontaneous emission, each step is proportional to the matrix element squared, because we're talking about [INAUDIBLE] spontaneous emission. So this is proportional to N squared. And if you say that we have N steps, well, then we have N squared over N: we have a speed-up. Each step is N squared times faster, we divide by N, and we get the superradiant speed-up, which is N. So superradiance is something that you observe in spontaneous emission, but you cannot observe it in a driven system. Because in a driven system, you can say you have a classical external field, and this external field talks to one atom or to N atoms in exactly the same way. It is really the interference of spontaneously emitted photons which is at the heart of superradiance. As a side remark: we are talking here about a coherent effect which is N times enhanced, and you can actually regard that as a kind of bosonic enhancement in the emission of photons, because the photons are bosons. When Bose-Einstein condensation was discovered and people were thinking about basic experiments, of course, one thing which was on our mind is that we wanted to show that there are processes in the Bose-Einstein condensate which are N times enhanced. For fermions, they would be suppressed -- that's the flip side: big enhancement for bosons, complete suppression for fermions. And we found that, for instance, the formation of the condensate had an N times enhancement; there was a stimulation factor. But we also thought there may be ways where you can observe suppression or enhancement of light scattering. We thought about it with two laser beams, [INAUDIBLE] scattering, and the idea seemed compelling. And then we said, no, wait a moment. If you use laser beams, everything is stimulated. You can observe bosonic enhancement and fermionic suppression only when you have spontaneous events. If you drive it in a unitary time evolution, you will not be able to see quantum statistical suppression or enhancement. And it's the same thing as we have seen here: when you have a stimulated system, everything undergoes a unitary time evolution, and the unitary time evolution for N atoms is the same as for a single atom. You need the element of spontaneous emission. So I'm not proving it to you, I'm just making a remark: what we have seen here -- that superradiance only shows up in spontaneous emission and not when we drive the system; a driven system is a unitary evolution -- the same conclusion also applies if you want to observe fermionic suppression or bosonic enhancement in quantum gases.
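Written compactly, the counting argument just given is (with Gamma the single-atom spontaneous rate and the roughly N-fold enhanced mid-ladder matrix element):

```latex
\text{driven:}\quad
t_{\rm total} \;\sim\; \frac{N\ \text{steps}}{N \times (\text{single-atom rate})}
\;\sim\; t_{\rm single\ atom},
\qquad
\text{spontaneous:}\quad
t_{\rm total} \;\sim\; \frac{N\ \text{steps}}{N^{2}\,\Gamma}
\;\sim\; \frac{t_{\rm single\ atom}}{N}.
```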
It needs an element of spontaneous scattering or spontaneous emission. Yes? AUDIENCE: If we think of it in terms of interference of photons, how does that tie in here? Because if the stimulated [INAUDIBLE] photons are still interfering, then you can get emission [INAUDIBLE]. PROFESSOR: The quick answer is, you have a classical field -- you have a laser field for stimulated emission or absorption. There are so many photons in the laser field that the few photons which your system emits do not matter. The atoms are really talking to a classical field, and it doesn't matter whether the other N minus 1 atoms have emitted a photon, because you have zillions of photons in your laser field, and they determine the dynamics of the system. OK. Superradiance would not be as important as it is if it could not be observed in extended samples. So now I want to use the last 10 minutes to show you what is kept and what has to be dropped when we talk about extended samples. It doesn't really matter, but for pedagogical reasons, let's assume we have an extended sample which is much, much longer than the optical wavelength along the long axis of the cigar. Whether it is also larger than the wavelength along the short axis does not really matter -- I just saw a contradiction in my notes, so I'm not making that assumption. So anyway, let's have a Cuban cigar, a really thick cigar, and this is now our extended sample. And what I need is the cross section of the sample, A. Let's assume the length is l and the diameter is d, with the cigar much, much longer than it is wide. And yes, we are talking about superradiance, we are talking about spontaneous emission. But if you see a long cigar of excited atoms, you think immediately about lasing action: a photon which is emitted is amplified along its path. And of course, the preferred direction where you would expect the maximum effect is when the light is emitted along the long axis of the cigar. So you want to consider now preferential modes along the x-axis. If an atom here and an atom there emit light in this direction, the light may constructively interfere, but in another direction it will destructively interfere. So let us now consider what is the solid angle into which all of the atoms can coherently emit. Well, you know from classical optics that the emission into a solid angle of lambda squared over A can be coherent. It's similar to when you have a double slit and you ask, over what angle do the two slits emit in phase: you get a bright fringe, a dark fringe, the next bright fringe. The coherent angle is the angle over which the path length differences do not add up to more than lambda. It's the diffraction-limited angle, which for a beam of size d is lambda over d. And if you take it to the second dimension, the solid angle is lambda squared over d squared. So that's what I'm talking about. If you would give all the atoms in your ensemble just the right phase so that they coherently emit along the x-axis, they will also coherently emit into a small solid angle, and that solid angle is given by this number. So the gist of it is -- and I will not completely prove it to you, but I just want to give you a taste -- that we still have a superradiant enhancement.
Previously, when we had the localized system, we knew the superradiant enhancement was N. But now the N atoms act together, and they're not acting together for emission into 4 pi -- they are acting together for emission into this solid angle. And if I write big N as the density n times the length l times the cross-sectional area -- roughly d squared -- then the d squared cancels against the solid angle lambda squared over d squared, and I get n lambda squared l. And if you remember that the cross section of an atom for absorption was lambda squared -- and if the atom is excited, the cross section for amplification of light, for stimulated emission, is also lambda squared -- then lambda squared is the gain cross section. So what we find as a superradiant enhancement factor is nothing else than something which reminds us of a laser, which reminds us of optical gain. And actually, the lasing phenomenon and superradiance in extended samples have a lot of analogies; in some limits they are even identical. When we are talking about spontaneous emission, we are not talking about stimulated emission. But if you have a system which is in some excited superradiant Dicke state, and we are asking what are the spontaneously emitted photons coming out -- to say that different atoms emit into the same mode, and that you now have to add up the fields coherently, this is the language we have used so far. Or you can use the language that an atom emits a photon and this photon gets amplified on its way out. Those two languages strongly overlap, or in some limits are even identical. So the amplification of a photon on its way out, this is what is behind superradiance. But when we localize the atoms to less than a wavelength, well, the atoms pretty much emit as a whole, and there is no path length of the size of the optical wavelength along which you could say the photon propagates and gets amplified. So there we just looked at what comes out. But in an extended sample, you could even address this situation: how do the photons get amplified, magnified, augmented when they travel from the center to the edge? You could actually ask, what is the light intensity as a function of position within the cigar? For localized samples, you can't. So let me just write that down: this is analogous to optical amplification in an elongated, inverted medium. OK. So you can formally describe that. You can now define new Dicke states with respect to the preferred mode, and the preferred mode is the mode in the x direction. So what I've done is -- remember, we have those atoms, and those atoms are now sitting at different positions, x1 and x2. And I define Dicke states which have phase factors e to the i k x1, e to the i k x2. If now this atom emits a photon and that atom emits a photon, well, the second photon is x2 minus x1 ahead of the first photon, if you think of those atoms sitting aligned in a string. But the phase factor exactly cancels the propagation phase, in such a way that if you now couple these states to the electromagnetic field, the phase factors of the electromagnetic field in the mode cancel with those phase factors, and you again have the situation that each state here has an equal amplitude for emission. So you have N possible contributions, the normalization is 1 over square root N, and everything falls into place. And you can define that for two excited atoms with two phase factors, and so on. So you can immediately use the same formalism. And what happens is, those phase factors appear in the interaction Hamiltonian -- our interaction Hamiltonian is now different. It is d_i.
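As a rough numerical illustration -- the numbers below are made-up example values, not from the lecture -- here is the coherence solid angle and the resulting enhancement factor n lambda squared l:

```python
# Sketch with example numbers: diffraction-limited coherence solid angle and the
# extended-sample superradiant enhancement ~ n * lambda^2 * l (the resonant
# optical density), for a cigar of diameter d and length l.

lam = 780e-9      # optical wavelength [m]   (example value)
d   = 10e-6       # sample diameter [m]      (example value)
l   = 200e-6      # sample length [m]        (example value)
n   = 1e19        # atomic density [m^-3]    (example value)

A = d**2                          # cross-sectional area, up to factors of pi
solid_angle = lam**2 / A          # coherence solid angle ~ lambda^2 / A
N = n * A * l                     # total atom number
enhancement = N * lam**2 / A      # = n * lambda^2 * l

print(f"coherence solid angle    ~ {solid_angle:.2e} sr")
print(f"atom number N            ~ {N:.2e}")
print(f"superradiant enhancement ~ n*lambda^2*l ~ {enhancement:.0f}")
```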
And now, in an extended sample, we have to keep track of the position of each atom. So for the coupling to the mode in the x direction, we have those phase factors, and all the phase factors cancel. And actually, I'm not telling you whether this is plus or minus in order to cancel -- you pick the sign so that they all cancel, and then you have superradiance; you have fully constructive interference. So all of this now looks the same as superradiance, but there are also things which are different, and it is the following. If the atoms now emit photons not into the preferred mode, then -- remember, we had the Dicke ladders: the most superradiant ladder, a little bit less superradiant ladder, and eventually the subradiant ladder, in order of smaller and smaller total quantum number s. Emission into the preferred mode stays within each ladder, and we have the superradiant cascade. But emission into other modes now couples different s states, because the operator, or the phase factor e to the i k r, has broken the complete permutation symmetry between the sites. We have changed the symmetry: we do not have the completely symmetric sum, we have a symmetric summation with phase factors. So the phenomenon is somewhat different, but we still have a superradiant cascade for the preferred mode. And the result is that we have an enhancement for the most symmetric, for the superradiant states, which is given by that -- and this is nothing else than the resonant optical density of your sample. So in experiments -- many of them go on in [INAUDIBLE] lab, where he uses collective spin and the storage of single photons in N atoms -- the figure of merit of the samples is always the optical density, the atomic density times lambda squared times the length. Finally -- and sorry for keeping you three more minutes -- a form of superradiance which is very important is Raman superradiance. We don't have an excited state where we put a lot of excitation, because the excited state would be very short-lived. What we do instead is: we have a Rabi frequency omega 1, we have a large detuning delta, and then spontaneous emission with the coupling constant g takes us down to the final state. In the case that the Rabi frequency is much, much smaller than delta, we can eliminate the excited state from the description. And what we obtain is now a system which has an effective excited state. The width of this excited state -- this is pretty much the virtual state here -- is the scattering rate: the probability to excite the atom is the Rabi frequency over the detuning, squared -- that's just perturbation theory -- and then we multiply by gamma, or gamma over 2. So this is the rate of spontaneous emission out of the virtual state. And from this virtual state, we go now to the ground state. And the Rabi frequency, or the coupling, for this virtual state is the original coupling g between ground and excited state, but now pro-rated by the amplitude with which we have mixed the excited state into the virtual state. So we have now obtained a superradiant system. And for instance, we did experiments which have become classics now because they are conceptually so clear: we took a Bose-Einstein condensate, we switched on one strong off-resonant laser beam, and then we had a system which was 100% inverted, because we had no atoms in the final state. The final state is a Bose-Einstein condensate, but with a recoil kick.
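One common way to write the adiabatically eliminated quantities sketched here is the following; the exact factors of 2 depend on the Rabi-frequency convention, so take them as indicative only:

```latex
\Gamma_{\rm eff} \;\sim\; \Big(\frac{\Omega_1}{2\Delta}\Big)^{2}\,\Gamma,
\qquad
g_{\rm eff} \;\sim\; g\,\frac{\Omega_1}{2\Delta},
\qquad
\Omega_1 \ll \Delta .
```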
So by just having a Bose-Einstein condensate and shining this laser light on it, we had, in this picture, a 100% inverted system, which is the ideal realization of a fully inverted Dicke state. Everything is completely symmetric, and then we observed superradiant emission of light pulses. OK. So these have been the important experiments -- important experiments have been done with BECs in my group and with laser-cooled samples of cold atoms in [INAUDIBLE]. So why is superradiance so important? And this is my last statement for this class and for the semester. If you have extended-sample superradiance, those samples are no longer coupling to the electromagnetic field with the coupling constant g; the coupling constant g is now multiplied by the optical density of your sample. And there is a lot of interest in current research -- for quantum computation, manipulation of photon states, and all that -- to do cavity QED. In cavity QED, we try to have very good mirrors and a very small mode volume, to have a very, very large g. But this large g which we achieve in a cavity, if you put many atoms in it, gets enhanced by the optical density. So the cavity enhancement and the superradiant enhancement are multiplicative. And often it's very favorable for single-photon manipulation if you use both: you get enhancement from the cavity and enhancement due to superradiance. And the person who has really pioneered work along this direction is [INAUDIBLE] here at MIT. Anyway, yes, with five minutes delay, I finish the chapter on superradiance. Well, that's the end of this course. Let me thank you for your active participation. Sometimes as a lecturer, you learn as much as the students, and I think, partially based on your questions and discussions, this is really true. I've learned a few new aspects of atomic physics. I hope you have learned something, too. And good luck in the future.
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
3_Resonance_III.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Then let me just-- I gave you two things about the last class, one is the question about the electron and factors of 2 and what precesses at what frequency. Think about it, I mean, write down for yourself what is a magnetic moment of an electron? What is a magnetic moment of a classical particles which is one unit of angular momentum and try to sort of re derive it. The way how I did it is deceptively simple. But whenever you think about it, you will get confused about some factors. But this is, of course, where I showed you here is the quantum mechanical mystery about the g-factor of 2 that you have an electron which has one more magneton of magnetic moment, and you have a classical particle which has one more magneton. The energy levels of up and down what are the same. The energy is 1.4 megahertz per Gauss. But the precession of frequency of the electron is 2 times faster than the precession frequency of the classical particle, and this is what the g-factor of 2, which comes out of the Dirac equation means. And at least one intuitive explanation I can give you is a frequency which is observed as a precession frequency in the laboratory is the beat frequency between two neighboring levels. And here you have a level in between. So the beat frequency between two neighboring levels is 1.4 megahertz, and here the beat frequency is 2.8 megahertz. And you will actually see that if you take a classic system which has much, much larger angular momentum, maybe L equals 10. It will have this level structure, but it will go from minus 10 to plus 10. But the precession frequency is the beat frequency between two levels. Or if you don't like the word beat frequency, it precesses and you can increase the precession. So if you're not increasing the impression, you can increase the energy of the precessing system by driving it at the resonance frequency which is a precession frequency. And the resonance is always the energy difference between two levels. So no matter how many levels you have in a classical system with a g-factor of 1, you're always driving it step by step, and each step is the resonance and this is the precession frequency of the system. Questions about it? I know people get confused all the time, but just think about it. I think I gave you all the different angles you can use to look at it. Questions about that? OK. Next thing I just want to point out is rotation. Yes, we are spending quite some time in class just to figure out that something is rotating. So what we figured out here is that if you go to a rotating frame, if you have angular momentum, and we go to a rotating frame, that that simply means that the effect of a rotating frame is to simply add a fictitious magnetic fields to the real magnetic field. Well, that's very handy. For instance, we know then immediately if you have a real magnetic field and we pick the fictitious magnetic field in such a way that the real magnetic field that the total field cancels it. It cancels the total magnetic field, well, then we have a system viewed from the rotating frame where the effective field is 0. Well, the spin is zero field does nothing and then we know when you go back into the lab frame, all this spin is doing it rotates. 
It's an exact solution. I want to use it today again but in a different way, but I want to just point out today we will not picking the fictitious field to make the magnetic field zero. Today we are talking about what happens to a spin in a rotating field. Complicated. A time-dependent problem, time-dependent Hamiltonian. But if you go in a rotating frame, if you rotate with the field, then in the rotating frame it becomes a time-independent problem which we can immediately solve. So today we use the same transformation again, but we pick our frequency not to cancel some static field. We pick our frequency to co-rotate with an external rotating field that in our rotating frame now everything is stationary. So that's what you're going to do today. And finally just to give you an outlook, just try to sort of make you aware that often what we're doing is the same thing in a different angle. So this is one way to deal with rotation. I'll give you an exact solution by going to rotating frame. I will later show you today that quantum mechanically the solution to the Heisenberg equation of motion for angular momentum or magnetic moment in a magnetic field is exactly a rotation. So we'll again show you that a quantum mechanical solution, the solution of a time-dependent problem is exactly rotation. And later on when we use the spin-1/2 Hamiltonian and we write down the wave function, we solve sometimes the wave function by transforming the wave function. And this will be, again, the rotating frame transformation. It's not always called like this, but it's always the same. You go to some form of rotating frame, and we'll do that in three different ways. This is the first way just as a general classical physics transformation to rotating frame. We will do it again for the expectation value as a solution of Heisenberg's equation of motion, and then we do it again when we transform the wave function with a unitary transformation. A lot of time for simple rotations, but its good thing. It really provides a lot of insight. Anyway, this is more sort of an outlook over today's lecture and part of Tuesday's lecture. Any question about the summary and the outlook? Because before I really come back to the rotating frame, I quickly want to do something I couldn't. I ran out of time on Monday. We talked so much about harmonic oscillators, the precision at which we can determine the frequency of the harmonic oscillator, and I really want to give you examples. I want give you two outstanding examples for atomic clocks. And the two extreme examples are, well, the two best atomic clocks in the world. One is the cesium atom with a fountain clock. Well, we'll talk about it later, but some of you know that the cesium atom is hyper fine structure. In one state, the electron and nucleus spin are parallel. In the other state, they are antiparallel, and the transition frequency is 10 gigahertz, well, 9.something, but for the matter of this discussion, it's 10 gigahertz. And the definition of time, the definition for 1 second is in terms of so-and-so many cycles of this transition. So this transition has this frequency, and for decades, this frequency was determined in an atomic beam. You have an atomic cesium beam, and you interrogate it with microwave fields. But now with cold atoms, we can achieve much, much longer interrogation times by almost completely eliminating the atomic motion. In the current experiment, is that you have a cloud [? lace ?] are cooled to micro Kelvin temperature. You launch it into a fountain. 
The cloud goes up and comes down, and you interrogate it twice; the interrogation time is 1 second. And this interrogation time is no longer, as in conventional atomic clocks, limited by the thermal velocity of the atoms -- it's limited by gravity. If you want to increase the time to 10 seconds, you need a 100-meter tower, and nobody wants to build that. I mean, it would be a really big atomic clock. So you usually deal with an interrogation time on the order of 1 second. So we know, based on Fourier's theorem -- give or take factors of 2 pi -- that if you would record the spectrum, the linewidth delta omega would be, within a factor of 2, 1 over 1 second. And therefore the fractional linewidth is on the order of 10 to the minus 11. However, the accuracy of the best cesium fountains is now 10 to the minus 16. So people are able to split the line to 1 part in 100,000, which requires exquisite knowledge of the line shape and also of systematic effects -- but well, an atomic clock is a piece of art. And I just want to emphasize that. I sometimes feel that when I talk about the precision at which you can measure a classical oscillator, it seems almost trivial, and you should know all about it. But I can just say, without looking at anybody, that I just had a lunch discussion with some of my graduate students, and we talked about laser stabilization. And one graduate student asked, but if the natural linewidth of a transition is 10 megahertz, can we stabilize a laser to better than a megahertz? Of course we can. I mean, here we have a fractional linewidth of 10 to the minus 11, but we can get an accuracy of the signal which is 10 to the minus 16. So a microwave oscillator -- which is the microwave equivalent of a laser -- can now be locked to the cesium transition with a precision of 10 to the minus 16, 100,000 times better than the linewidth. Let me give you a second example. Just noticing the red color -- this is red, I wanted to highlight it. There's probably nothing we can do; it seems the projector is not showing the color. Well, in the posted lecture notes this will be bright red, but maybe I should use more blue and green and yellow for highlighting today. That's really odd -- the red is completely missing. The other example I want to give you is the strontium -- yeah, it's blue -- the strontium optical clock. There was a really nice paper in Nature just a few weeks ago. Here is a level diagram of atomic strontium. There is a very fast s-to-p transition for laser cooling and trapping and all that, but then there is a very, very slow, forbidden transition to a triplet state, which is metastable. Those states have a very long lifetime, and therefore this transition is extremely narrow. So what is the state of the art for the strontium clock? Well, in the experiment they use an interrogation time -- they observe the atoms for a delta t -- which is on the order of 160 milliseconds, and, putting the 2 pis in the right places, that would mean the frequency resolution is 1 Hertz. And you see on the left-hand side, when they record the resonance, the blue one is about 1 Hertz, and there is something broader -- this is when they are not actively feeding back on the magnetic field. So for this fantastic precision, you have to control everything. But the blue line is sort of what we record as the clock transition: it's about 1 Hertz.
Now, compared to the cesium clock, the big advantage is that the strontium clock operates in the optical domain, at a frequency of 5 times 10 to the 14 Hertz. So you see, if the frequency is much, much higher, then even if you have a shorter interrogation time, your relative accuracy is better. And here the Q value -- nu over delta nu -- is on the order of 10 to the 15. Fantastic: 15 orders of magnitude. And they are splitting the line -- not as extremely as the cesium atomic clock with its factor of 100,000, but by a factor which is on the order of 300. And with that, they have an accuracy which is now really the record for the best performance of any atomic clock, which is 6 times 10 to the minus 18. Now, this is in the optical domain. You may wonder about the laser -- how stable is the laser? The laser is stabilized to an optical cavity, but because of thermal fluctuations in the mirrors -- thermal noise -- the laser is limited in the short term to 10 to the minus 16. So the short-term stability, between 1 and 1,000 seconds, is 10 to the minus 16. So they use a laser which has a stability of 10 to the minus 16. Every 1.3 seconds, they take a data point. Each data point, or the spectral width, is 2 times 10 to the minus 15. The laser is at 10 to the minus 16, the line we record is at 10 to the minus 15; but since the thermal noise is completely random, by averaging they can determine the line center to better than 10 to the minus 17. So I think this just illustrates the precision at which you can observe a harmonic oscillator, and what I like about this example is that it shows you that both the transition you are recording and the laser itself -- 10 to the minus 15, 10 to the minus 16 -- are worse than the final precision of the measurement, which is better than 10 to the minus 17. Makes sense? But yes, that's what it is. Questions about that? Yes, Collin. AUDIENCE: What was the accuracy of the [INAUDIBLE] result? Was it better than 6 times 10 to the minus 18? PROFESSOR: No. This 6 times 10 to the minus 18 is really the [INAUDIBLE] record. AUDIENCE: Is that the [? atomic ?] clock? PROFESSOR: There was an aluminum ion clock which had, I think, 10 to the minus 17 precision. But I think 6 times 10 to the minus 18 is close to that. So they are close to, or better now than, the aluminum clock -- but the aluminum clock is a single ion; you have to average for a much, much longer time to get this precision. So that's a big advantage for the strontium clock, which has many, many atoms in an optical lattice. And they say that they have improved on the best previous lattice clock by a factor of 20. But with this improvement factor of 20, you always have to distinguish between sensitivity and absolute precision. For the absolute precision, you also have to control all systematic effects. And the big step here, which was really boosting the absolute precision, was that they completely controlled the blackbody environment. You really have to know with high precision what the effective temperature of the blackbody radiation is, because at the 10 to the minus 18 level it causes a shift of the atomic resonance. We'll talk about it later, but it's the AC Stark effect of the blackbody radiation, which becomes an important systematic at that level of precision. I don't know exactly the numbers for the previous atomic clocks, but this is sort of now the gold standard of frequency metrology. OK. Let's go.
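A quick back-of-the-envelope check of the numbers quoted for the two clocks (a sketch only -- the exact Fourier factor depends on the lineshape and the interrogation scheme):

```python
import math

# Cesium fountain: ~10 GHz hyperfine transition, ~1 s interrogation,
# line split by a factor of ~1e5.
nu_cs  = 9.2e9                         # transition frequency [Hz]
T_cs   = 1.0                           # interrogation time [s]
dnu_cs = 1.0 / (2 * math.pi * T_cs)    # Fourier-limited linewidth [Hz]
print("Cs fractional linewidth ~ %.1e, after splitting ~ %.1e"
      % (dnu_cs / nu_cs, dnu_cs / nu_cs / 1e5))

# Strontium lattice clock: optical transition at ~5e14 Hz, ~1 Hz observed line,
# split by a factor of a few hundred.
nu_sr  = 5e14
dnu_sr = 1.0
print("Sr Q ~ %.1e, after splitting ~ %.1e"
      % (nu_sr / dnu_sr, dnu_sr / nu_sr / 300))
```

These reproduce the roughly 10 to the minus 11 and 10 to the minus 16 quoted for cesium, and the Q of about 10 to the 15 and the few times 10 to the minus 18 quoted for strontium.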
Yes, what you have here is you have sort of classical harmonic oscillator, but the classical harmonic oscillator is based on doing measurements on seamless strontium atoms and single cesium atoms. So you see that the accuracy, and we had a discussion on it last class, the accuracy at which you can observe a quantum mechanical oscillator. In a classical oscillator, it's pretty much the same. It's the same kind of-- if you have a good signal to noise, you can improve your precision for the line center substantially. From those results, and exciting atomic clocks, let's go back to classical physics. So we want to go back to our classical magnetic moment and understand the motion of it. But in addition to what we discussed so far, a stationary magnetic field, we now want to add to it a rotating magnetic field. So the situation is that-- just use another color-- we have a magnetic moment. We assume it's classical, and we have a magnetic field, which we assume points along the z-axis. And then, of course, we know that from our previous discussion that the spin undergoes precession. It's sort of precesses around the magnetic field at the Larmor frequency. And now, let's assume we have-- the yellow shows up. We have a rotating field. We add a rotating field eight B1, and just to keep things simple, we want to assume that the rotating field rotates at the same frequency as a magnetic moment, so we are on a resonance. Because what happens now is we can simply do a transformation to the rotating frame. The rotating frame is now at the Larmor frequency and we have just learned that in the rotating frame, of course, in the rotating frame the rotating field stands still, so it becomes a static field which points in the x direction. And we have the static field in the z direction, but now that's what I just reviewed. We have a fictitious magnetic field, which comes from the transformation to the rotating frame and on resonance, at the Larmor frequency, this is exactly the negative of B naught. So in other words, we started out to expose a magnetic moment to a time-dependent field. So we had a rotating field cosine omega Lt sine omega Lt. But in the rotating frame, this becomes now the x prime x, and it's stationary in the rotating frame. So this was the field in the lab frame. In the rotating frame, we have an effective field, which is the field in the lab frame minus the fictitious field, and the fictitious field was given by that. It just cancels the B naught component. And in the rotating frame, this vector and this vector cancels and we are just left with a field of strengths B1, which points in the x-axis, or actually the x-axis in the rotating frame is what I call x prime. So therefore, we have a very simple problem that in the rotating frame. We have a static field of value B1. And the good thing is, we know already what a magnetic moment does with a static field. The magnetic moment is just precessing around the static magnetic field. So therefore, our solution is now we have transformed the time-dependent field and now we know that mu precesses around this field, and the precession frequency is the Rabi frequency, which we discussed previously. The Rabi frequency is the gyromagnetic ratio times B1. So therefore, if we would start out with a magnetic moment aligned with the z-axis. If we would wait half a Rabi's cycle, the magnetic moment would now be inverted. So the situation is we have a magnetic moment which points in the z-axis with a field in the z-axis. 
But we expose it to a rotating field which rotates in the xy plane. So in the rotating frame, we have a stationary field which points in x. In this rotating frame, the spin is simply precessing around what is now a steady field, and after half a Ravi cycle, the spin points down. So therefore we know-- going back to lab frame-- that this rotating field has caused in quantum mechanics I would say a spin flip, a full reversal of the magnetic moment, and this is what we call the pi pulse-- it has already rotated the spin by pi-- or we call it a spin flip, but it's a completely classical system. Any questions? Well, then I would question for you. I've discussed with you the case that the rotating field rotates at the Larmor frequency. But now I want to discuss the case that the rotating field is not at the Larmor frequency, it's at the frequency omega 1, which is faster than the Larmor frequency. And the question is now what will happen to the spin or the magnetic moment. I explained to you that on a resonance, the magnetic moment was just flipping over, rotating precessing at the Rabi frequency. And I want you to think about it for a moment and then decide if we go off resonant, if we drive this system away from the resonance with a rotating field which is faster than the Larmor frequency, what is now the oscillation frequency of the magnetic moment. And so the choices are, is it larger, smaller, or the same? And this was, of course, compared to the Rabi frequency. So I've explained to you that the spin flip, the Rabi flopping, or the pi pulse-- and this picture was at the magnetic moment-- does Rabi flopping rotates plus z minus, z plus, z minus, z at the Rabi frequency. But now we drive the system faster than the Larmor frequency and the question is, is whatever this magnetic moment does, is it faster, slower, or does it always happen at the Rabi frequency? Larger. Good. Well, let me then immediately add another twist. What would happen-- let's ask the same question, but now we are driving it at a lower frequency. The yellow. This projector has a problem. So now same question as before, but instead of driving the system with a faster rotating field, faster than the Larmor frequency, we are driving it with a smaller frequency. Is now the response of the magnetic moment, the effective precession frequency, larger, smaller, or the same as the Rabi frequency as the resonant case? OK, good. So that means I can go very quickly about the explanation. It's correct. Whenever you are off resonant, this system precesses faster, so let me summarize what probably is obvious to all of you. What happens when we have an off-resonant rotating field. When we have an off-resonant rotating field, we go to the rotating frame, but of course, the rotating frame we go to is now not rotating at the Larmor frequency because the purpose of going to rotating frame is get rid of the time dependence of the rotating field. So we go to rotating frame which rotates at the frequency of the rotating field. So if the rotating field rotates at a frequency omega, we have a fictitious field, which is omega over gamma. Gamma is the gyromagnetic ratio. For the resonant case, we were just completely canceling the static field in the z direction, but for the off-resonant case, when omega is larger or smaller in both cases, this one here is no longer 0. And our total effective field is now the quadrature sum of what we have in the z direction, and what we have in the x direction or x prime direction, this is B1. 
So in the z direction, we have the static field minus the fictitious field and then the two are added up On resonance, the angle theta is 90 degrees, but for the off-resonant case, the angle is different given by the simple geometric result. And the effective field is the quadrature sum of B1 squared plus B naught minus-- and the fact is this adds something to the effective field in the rotating frame whether we drive it above or below resonance. So therefore the magnetic moment precesses at what is called the generalized Rabi frequency, which I now call-- this would be red-- let's use green instead-- the generalized Rabi frequency, which is gamma times B effective, and this is the quadrature sum of the detuning plus the capital letter omega, omega R, is the Rabi frequency at resonance, and this is nothing else than a measure for the drive field for the strengths of the drive field B1 in frequencies. So therefore, the generalized Rabi frequency is the resonant Rabi frequency added in quadrature with the detuning squared. Any questions? So because it's an exact result and I like the result Rabi flopping at the generalized Rabi frequency, I want to derive it for you. So I want to figure out what is the dynamic of a spin which is originally aligned, and now it undergoes-- it is driven by the rotating field. Remember, the resonant case was very simple. The spin was just doing Rabi flopping. It was fully inverted, came back, and just did this at that Rabi frequency. Now we know that in the off-resonant case, there will be an effective magnetic field and it will precess at a faster frequency, which is a generalized Rabi frequency, but since the effective magnetic field is not transverse, it has a z component, the spin will never fully invert. So geometrically, it's very easy. We start out with a magnetic moment at zero time, and I can immediately draw to you the complete solution. The complete solution is that in the rotating frame, this sort of precesses around the effective magnetic field. This is the solution. But I just wanted to do is because it takes me three or four minutes, I want to read from this graph, from this drawing, one of two trigonometric identities and derive for you the explicit expression, what is the value of the magnetic moment as a function of time. But it's clear from that it will have a maximum value. It precesses around the tilt direction, and when it's over there, it has a minimum value, but it will never completely invert. Well, quantum mechanically, if you drive a system not on resonance, you cannot completely invert the population, but we'll come to the later. So what do I need? Well, the spin is moving here on a circle. I need a few angles. So let's say the spin was here at one time. At another time, it is there, and that would mean that on the circle, it has moved an angle phi. The tilt angle between the spin and the magnetic field is what I call theta. And the angle between the initial magnetic moment and the magnetic moment at time, t, is what I call alpha. Yeah, these other the three relevant angles. The tip of the magnetic moment goes in a circle, and this circle has a radius, which is mu times sine theta. Sine theta is nothing else than the rotating magnetic field over the effective magnetic field, which is nothing else than the resonant Rabi frequency divided by the generalized Rabi frequency. I said what I want to determinant is the magnetic moment in the z direction as a function of time, and for that, I defined the angle cosine alpha. 
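Collecting the rotating-frame quantities just described in one place (a reconstruction in the lecture's notation, with omega_R = gamma B1 the resonant Rabi frequency and omega_0 = gamma B0 the Larmor frequency):

```latex
\vec{B}_{\rm eff} \;=\; \Big(B_0 - \frac{\omega}{\gamma}\Big)\hat{z} \;+\; B_1\,\hat{x}' ,
\qquad
\Omega \;\equiv\; \gamma\,\big|\vec{B}_{\rm eff}\big|
\;=\; \sqrt{\omega_R^{2} + (\omega_0 - \omega)^{2}} ,
\qquad
\tan\theta \;=\; \frac{\omega_R}{\omega_0 - \omega} .
```

On resonance the detuning term vanishes, theta is 90 degrees, and Omega reduces to the resonant Rabi frequency omega_R.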
So the way I derive it -- the easiest way -- is geometry: three dimensions, triangles, and all that. The best way I can describe it for you is to introduce an auxiliary line which connects the tip of the magnetic moment at time t equals 0 and at time t, and I call the length of this line A. I now want two triangles where one side is A, determine A in two different ways, combine the equations, and we are done. So the first way: we have the magnetic moment at time 0 and the magnetic moment at time t; we said the angle between them is alpha, and the two tips are connected by A. You know that in a right triangle this would be Pythagoras, but the law of cosines is valid for a general triangle. So applying that to this triangle: A squared -- b squared is mu squared, c squared is mu squared, so we get 2 mu squared, and the cross term 2 b c cosine alpha is 2 mu times mu times cosine alpha. So A squared is 2 mu squared times 1 minus cosine alpha. So we've taken care of the first triangle. I hope the drawing is not completely confusing at this point, but why don't we just look at the drawing looking down along the effective field. If we look down the effective magnetic field, we see the circle on which the magnetic moment precesses, and the radius of that circle I've already given to you. The magnetic moment has precessed from here to there. We connect this line -- this was our A -- and the angle by which the magnetic moment has precessed is phi. And the radius, as we derived before, was mu times sine theta. So now, just using the same relation for this triangle, we find that A squared is 2 mu squared sine squared theta times 1 minus cosine phi. And for cosine phi, I want to use the trig identity and express it through half the angle. All right, now we are done. The drawing is clear -- precession around a tilted axis -- and we're just doing geometry here. We now have two expressions for A squared; we can set the two expressions equal and solve for the unknown, which is cosine alpha. And with that, we find that cosine alpha is 1 minus 2 sine squared theta sine squared phi over 2. And the purpose of this exercise was that cosine alpha tells us the tilt angle of the magnetic moment away from the vertical axis, so we have done what we wanted to do. The z component of the magnetic moment as a function of time is mu times 1 minus 2 times the resonant Rabi frequency squared over the generalized Rabi frequency squared, times sine squared of phi over 2 -- because sine theta is the resonant Rabi frequency over the generalized Rabi frequency. And the precession frequency at which the tip of the magnetic moment moves on the circle -- we discussed it already, and you gave the correct answer with the clickers -- is the generalized Rabi frequency, so phi is the generalized Rabi frequency times t. So that's a nice formula, but before we lean back and look at it, let me just do one tiny step. Based on our quantum mechanical background, we can now define the probability that the spin has been flipped as the relative change in the z component, appropriately normalized. This is just the normalized change in the z component, and if I call the left-hand side that probability, then I find that this classical expression, which expresses how much the z component of the magnetic moment has changed, is exactly the celebrated formula for spin flips in a spin-1/2 system.
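The end result of this geometric derivation, written out (a reconstruction; phi = Omega t is the precession angle at the generalized Rabi frequency Omega, and sin theta = omega_R / Omega):

```latex
\cos\alpha \;=\; 1 - 2\sin^{2}\theta\,\sin^{2}\!\frac{\Omega t}{2},
\qquad
\mu_z(t) \;=\; \mu\left[\,1 - \frac{2\,\omega_R^{2}}{\Omega^{2}}\,
\sin^{2}\!\frac{\Omega t}{2}\right],
\qquad
P_{\rm flip}(t) \;=\; \frac{\mu - \mu_z(t)}{2\mu}
\;=\; \frac{\omega_R^{2}}{\Omega^{2}}\,\sin^{2}\!\frac{\Omega t}{2}.
```

The last expression is the Rabi formula referred to here; on resonance (Omega = omega_R) the flip probability reaches 1, off resonance it never does.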
So we have derived exactly the solution for the motion for the precession of a magnetic moment in a magnetic field plus the rotating magnetic field, and what we found is, we found that the magnetic moment precesses at the generalized Rabi frequency. And as I will show you next week, this result is exactly the same as for quantum mechanical expectation values. Question, yeah? AUDIENCE: So what if we have magnetic field oscillating just in one direction instead of rotation magnetic fields? PROFESSOR: Oh, that's a much more complicated problem. What happens when we have the magnetic field which is linearly polarized, which is only oscillating in one direction? Well, light or a vector which is oscillating linearly in one direction can be regarded as a superposition of a left and right rotating field. In other words, if you superimpose left-handed and right-handed circular polarized light, the sum of the two is linearly polarized light. So now we have actually the situation that linearly polarized magnetic field-- that's what we usually do when the lab. I mean, we have coils. We connect them to synthesizer and the field is not going in a circle. It's going back and forth. It's linearly polarized. This corresponds to a magnetic field which corresponds to two magnetic field, one rotates left and one rotates right. But the problem is if you now do a transformation in the rotating frame, do we want to rotate omega to the left or omega to the right? So what we can do is, we can pick our rotating frame and we are now in the rotating frame. One of the rotating fields has become time independent, the other rotates now at 2 omega. And at that point, we need the celebrated rotating wave approximation that we keep the one term [? via ?] rectified and the other one at two omega rotates so rapidly that we say these rapid oscillations do nothing and we discard it. We'll discussed it later in this course. But the gist is, if you have linearly-polarized light, linearly-polarized magnetic fields, we usually have to do an additional approximation, the rotating wave approximation. And since the rotating wave approximation is done always in almost any treatment, any paper you can find, we think it's always necessary. But what I've shown to you is, when we have a rotating field, we don't need any approximation. The transformation in the rotating frame is exact, but that's the beauty of it the when we assume rotating frames, we can hold on to exact solutions for longer and only later then discuss what happens when we introduce linearly-polarized magnetic fields. But that's something we definitely do later in its full beauty. 15 minutes. OK. There's one thing I want to do about classical spins and then we do with the full quantum treatment. And this aspect of classical spins is called rapid adiabatic passage. So we want to add one more piece to our discussion. So far we have assumed a static field and a rotating field, which rotates at one frequency. But now we want to change the frequency of the rotating field. So we have our magnetic moment in a static magnetic field, which is our quantization axis. And now we have a rotating field, but we increase the frequency of the rotating field from slow to fast and ask what happens. And the result is that by increasing the frequency, sweeping the frequency through the resonance, we can do something very useful. We can invert the spin. We can turn over the magnetic moment in a very robust way, and this is the concept of rapid adiabatic passage. 
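The decomposition behind the rotating wave approximation just mentioned, written out for a linearly polarized drive (a reconstruction, not from the board):

```latex
2B_1\cos(\omega t)\,\hat{x}
\;=\;
B_1\big[\cos(\omega t)\,\hat{x} + \sin(\omega t)\,\hat{y}\big]
\;+\;
B_1\big[\cos(\omega t)\,\hat{x} - \sin(\omega t)\,\hat{y}\big].
```

In the frame co-rotating with the first term it becomes static, while the second, counter-rotating term appears at 2 omega and is the one discarded in the rotating wave approximation.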
A lot of you may be familiar with the concept of a Landau-Zener transition; what I'm telling you is exactly the classical counterpart of the Landau-Zener transition. Actually, in many cases when it comes to spin physics, the classical physics and the quantum physics are the same. So that's why I want to discuss rapid adiabatic passage and Landau-Zener physics first in the classical environment. Rapid adiabatic passage is a technique for inverting -- turning around -- spins or magnetic moments by sweeping the frequency of your drive field across the resonance. Adiabatic means that this frequency sweep has to be slow, and slow means slow compared to the Larmor frequency. At any given moment, the magnetic moment precesses at the Larmor frequency, which is the gyromagnetic ratio times the magnetic field, so the situation is sort of quasi-stationary: the spin precesses around the effective magnetic field, and we want to change the frequency of the drive slowly compared to that motion. Well, the name rapid adiabatic passage has the word adiabatic, which means slow, but also the word rapid. Rapid means we have to be fast compared to relaxation processes, which we do not discuss here in our idealized environment. For instance, if you do rapid adiabatic passage in an environment where the atoms can collide, rapid means you have to do it fast enough before you have decoherence due to collisions. So: slow compared to the Larmor frequency, and rapid compared to all the things I'm not mentioning here -- rapid compared to decoherence and relaxation processes. I will not set up differential equations and solve them; I want to give you the intuitive picture of what goes on, but then also derive what is, sort of, the criterion for adiabaticity which we have to fulfill. So what are our ingredients? We have a magnetic moment mu. We have a static field B naught. We have a drive field B1, which rotates at a frequency omega. And we assume that the rotating field is smaller than the static field. It's not absolutely necessary, but you apply a big static field and then you have a perturbative drive -- that's the standard situation. We always want to have a quantization axis, the z-axis, defined by the static magnetic field; but it is only defined by the static magnetic field if the transverse field is not much larger than the static field -- otherwise we are talking about a somewhat different problem. And to be specific -- we'll later discuss what happens if it's not the case -- let me assume that we start with a frequency omega which is much, much smaller than the Larmor frequency, much, much smaller than the resonance. So what does that mean for our effective magnetic field? Let me just sketch it for you. Remember our effective magnetic field: we have a field B naught, we have a drive field B1, which we assume is smaller; but then, when we go to the rotating frame, we have a fictitious field, and if the frequency omega is below the Larmor frequency, this fictitious field is very small. So we start out in a situation where the effective field is pretty much pointing along the z direction. So, just to remind you, we have a situation where the effective field is just at a tiny angle, and if we start out with our magnetic moment aligned in z, and we assume B1 is really perturbative, the magnetic moment is very tightly coupled to it.
It has a very small precession angle, or if you want, if you take the magnetic field B1 to be perturbative, you can say the magnetic moment is aligned with the effective field. That's the limit that the cone angle of precession is very small. So let we write that down. This is what's omega much, much smaller than the Larmor frequency. So now we want to turn up the knob on the frequency of the drive. We want to rotate the drive field B1 faster and faster. And the picture you should have is-- I just wiped it away, but what that means is, the effective field, the fictitious field is no longer and longer component and on resonance, the fictitious magnetic field will cancel B naught. So in other words, at this point, when B naught is canceled, the effective field has only the B1 component in the x direction. So therefore, when be go to delta equal 0, the effective field is only B1 and points in the x direction. So what we have done is by changing the frequency, by ramping up the frequency to resonance, we have tilted the effective magnetic field from the z direction into the x direction. And at any given moment, I mean, we know what the exact solution of the magnetic moment is. At any given moment, the magnetic moment precesses around the effective magnetic field. And if the precession is very fast and the effective magnetic field is slowly rotating, the magnetic moment is just following. So therefore, at this point when we are at resonance, we have tilted the magnetic moment by 90 degrees. Just one second. And if we go with a frequency much higher than the Larmor frequency, then our fictitious magnetic field is much larger than B naught and therefore, the effective magnetic field points now in the minus z direction. So the idea is that as long as this rotation of the effective field from being plus z in the x direction and into minus z is slow enough, the rapid precession is locking, is keeping the magnetic moment aligned with the effective magnetic field and we have just a handle we invert. We move around the magnetic moment. So in the adiabatic limit, the spin precesses tightly, and by tightly I mean the angle theta is small around B ef, B effective, and follows the direction of the effective magnetic field. So we are rotating an effective field. We are rotating the magnetic moment, but we're not rotating anything in the laboratory. The only thing we are doing is, we are changing the frequency of the rotating magnetic field. Questions? Jenny. AUDIENCE: Oh, yeah. I was thinking, can you also do this by keeping the frequency the same and just ramping up the strength? Like say, put it at the-- make the B1 equal to the frequency of that Larmor frequency and just start from 0 and ramp up? PROFESSOR: Yes. I mean, the essence here is that the effective magnetic field is [INAUDIBLE] resonance, and what you are suggesting is if the frequency is constant, that would mean the fictitious magnetic field is constant. But if you were given a fictitious magnetic field and we a huge field B naught, the effective magnetic field points out, points up. But if we now make this static field B naught smaller and smaller, both through resonance and make it even smaller, we have also done an inversion of the effective magnetic field. The result is the same. Actually, sometimes in the laboratory, if you have a synthesizer which is not easily computer controlled, we do an analog sweep of the magnetic field, so we actually change the Larmor frequency of the atom instead of changing the drive frequency. 
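Before going on, here is a minimal numerical sketch of this adiabatic following. All units and sweep parameters are made up for illustration, and gamma is set to 1; the point is only that a slow sweep of the z component of the effective field through zero carries the moment from plus z to minus z.

```python
import numpy as np

def rotate(v, axis, angle):
    """Rodrigues rotation of vector v about the given axis by the given angle."""
    axis = axis / np.linalg.norm(axis)
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

gamma, B1 = 1.0, 1.0
steps, T = 100000, 2000.0                       # slow sweep -> adiabatic
dt = T / steps
z_component = np.linspace(50.0, -50.0, steps)   # (B0 - w/gamma), swept through zero

mu = np.array([0.0, 0.0, 1.0])                  # moment starts along +z
for bz in z_component:
    B_eff = np.array([B1, 0.0, bz])
    # exact precession about the (momentarily constant) effective field
    mu = rotate(mu, B_eff, -gamma * np.linalg.norm(B_eff) * dt)

print(mu)   # mu_z ends close to -1: the moment has followed B_eff and is inverted
```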
What really matters in the whole business is the relative frequency between the two. AUDIENCE: Is it necessarily true that the spin precesses tightly around the effective field? Isn't it that at resonance the radius of the circle of precession is equal to mu? Isn't it that [INAUDIBLE] before we found the radius, which was mu sine theta. Then sine theta cosine Omega R-- PROFESSOR: Yes. AUDIENCE: --divided into-- PROFESSOR: But the way is we start-- the way we've started it up here. Now if we are far away from resonance, the angle theta is infinitesimally small. Something confuses you. AUDIENCE: Yeah. I mean, it's just [INAUDIBLE]. PROFESSOR: See, what we have solved before, we have done something else before. What we have discussed before is that we have a spin which is aligned, and then we looked at the-- you can say the transient solution, where we suddenly switched on the rotating field. So that may sort of describe that. We have a magnetic field, which is in the z direction. Obviously, our spin is aligned in the z direction. And if you then suddenly switch on a rotating field, that could mean you have suddenly created an effective field, which is tilted. So then, by the sudden switch-on of your drive, you have an angle theta, and the magnetic moment precesses with the precession angle theta. But what we are doing here is, we are ever so slightly changing the angle theta, and then the spin stays aligned. That's the difference. So before I give you another example and discuss the limit of adiabaticity, let's just have a quick clicker question to make sure that everybody follows. So OK, what I think we have at least understood in the adiabatic limit is that when we start with a magnetic moment which is aligned, and now we do a sweep-- we start with a small rotating frequency and we make it large, we sweep from low to high frequency through resonance-- then we can invert the magnetic moment. My question for you now is, what happens when we start with a spin but we switch on a drive which is above resonance, and we sweep it down to small frequencies? What will now happen when we do the opposite sweep? Will the spin just stay, will it be flipped, or will it sort of get diffused or get disoriented? So this is answer A, this is answer B, and this is answer C. So what happens? I had assumed in the discussion before that we start with a very low frequency where the fictitious magnetic field is negligible. And then we change the frequency, we sweep the frequency across the Larmor frequency to very high frequency. My question now is, what happens when we reverse the direction of the sweep? OK. There is some distribution. The correct answer is indeed B, we flip the spin. What happens is the following. The spin is in the z direction, but at high frequency, the fictitious magnetic field is very large. So therefore, the effective magnetic field is now a large effective magnetic field pointing in the minus z direction. So now we have a situation where the spin tightly precesses around the effective magnetic field at an angle of 180 degrees, not 0 degrees, but 180 degrees. And now, when we change the detuning, the effective magnetic field tilts, but the spin keeps on precessing at 180 degrees, and eventually, when we sweep through the resonance, the effective magnetic field at low frequency of the rotating field becomes the real magnetic field, which is now pointing in the plus z direction, and the spin has followed the 180 degree rotation of the field.
So in other words, it does not matter whether you go from low to high or from high to low frequency. Whenever you go through the resonance, you flip the spin. Let's close that. Let's see. It is that. So the more generalized answer is that the rapid adiabatic passage always swaps the spin state no matter which way you sweep. When you start in spin up, you wind up in spin down. When you start in spin down, you wind up in spin up, because what you're doing is you're inverting the direction of the effective magnetic field and the spin just follows. So therefore it goes both ways. It applies to-- you can say to the ground state, the lowest energy state, it applies to the highest energy state, and as such, it is actually a swap operation like the pi over 2 pulse. When you have a pi over 2 pulse, you have a spin and you just pulse on a rotating magnetic field, which in the rotating frame means you have an x field. When the spin was up, it rotates down at the Rabi frequency. When the spin was down, it rotates up at the Rabi frequency. And quantum mechanically it just means you take the population in two states, and your unitary time evolution is a swap operation. And the classical counterpart is what we just discussed, the rapid adiabatic passage. However, there is one big experimental advantage of doing rapid adiabatic passage over a pi over 2 pulse. And I think many of you use rapid adiabatic passage in the lab. Remember, say you give yourself the task: you want to transfer the magnetic moment from spin up to spin down. If you want to do it with the pi pulse-- if you want to do it with the-- I said pi over 2 pulse, I meant pi pulse. The only way you can rotate the magnetic moment by 180 degrees with a pulse is if you are exactly on resonance, where the fictitious field cancels the static field, and what you have in the rotating frame is only a field in x. And then you can rotate around the x-axis. But if for some reason you're not exactly on resonance, then the fictitious field does not cancel the static field, and your effective magnetic field is at an angle. And if you rotate around an effective magnetic field which is not along the x-axis but at an angle, you cannot do a full inversion of the magnetic moment. So in other words, if you want to use a pi pulse to flip over a spin, you have to pulse on your drive exactly on resonance. And if you have ambient magnetic fields which drift by a few milligauss and you're not sure where the resonance is, you cannot do a perfect pi pulse. But with the sweep, you just have to sweep from point A to point B, and if you know you cross the resonance, you have a perfect inversion of the magnetic moment, which is robust against frequency drifts. Of course, you have to do a longer sweep. The pi pulse is the shortest possible way. When you sweep across the resonance of the cloud, you first sweep, nothing happens, nothing happens, then you go through resonance, and as you sweep further and further, you waste some of your time. So you pay a price for it, but often we want precision and we have the time to do it, and then the job is done by rapid adiabatic passage. Let me just mention one more thing and then I stop. I have discussed the physics of keeping-- with rapid adiabatic passage, I've discussed with you the physics that rapid precession keeps a magnetic moment aligned with an effective magnetic field. Let me now discuss the same phenomenon but in a very different environment. And this is a similar process that happens in a magnetic trap.
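Before moving on to the magnetic trap example: the sensitivity of the pi pulse to being off resonance can be made quantitative with the same generalized Rabi frequency discussed above. In the usual notation (not fixed on the board here), with Rabi frequency omega_1 = gamma B_1 and detuning delta, the inversion achieved by a pulse of duration t is the standard result

$$P_{\uparrow\to\downarrow}(t) \;=\; \frac{\omega_1^{2}}{\omega_1^{2}+\delta^{2}}\,\sin^{2}\!\Big(\tfrac{1}{2}\sqrt{\omega_1^{2}+\delta^{2}}\;t\Big),$$

so the best a pulse can ever do is omega_1 squared over (omega_1 squared plus delta squared), which reaches 1 only for delta = 0. The swept passage has no such requirement, which is exactly the robustness argument made above.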
In a magnetic trap, we don't have any drive or time-dependent field. But what happens is, we have a magnetic field, but the atoms move through the atom trap. So the atom sees a changing magnetic field. And in many of our experiments here at MIT, we use a quadrupolar field. We've discussed some of these aspects in 8.422. So a quadrupolar field-- the field has to be inhomogeneous in order to provide trapping forces, so we often use quadrupolar fields, which have a lot of advantages. That's how we build the tightest magnetic traps. But what happens now is, if an atom moves along this trajectory, it moves up in the laboratory. Here, let's say, the atom is in spin up. Here it's anti-aligned-- the magnetic field is opposite to the spin-- and now it moves up, and the magnetic field up here is pointing up. The physics I explained to you for rapid adiabatic passage-- the rapid precession of this spin-- means that as the atom moves, the spin stays aligned with the magnetic field. So you find the same physics here in a different environment, but the mathematical description is the same. Of course, and that's my last comment for today, if you have a spherical quadrupole trap and you go right through the origin, you're out of luck, because here the atom sees the magnetic field is down. It gets smaller, the magnetic field gets smaller, gets smaller. The magnetic field goes to zero. Oopsy, the magnetic field points in the other direction. And there was no warning. The magnetic field has jumped from 0 degrees to 180 degrees. There was never, ever any transverse field around which the atom could precess and change its orientation. So therefore, when an atom which is aligned with the magnetic field moves through the origin, oopsy, it's anti-aligned. It has lost its orientation with respect to the magnetic field, and this is the breakdown of rapid adiabatic passage because there's no adiabaticity. It is not an adiabatic change of the direction of the magnetic field, it's a sudden change. And the consequences are bad. You lose your atoms from the magnetic trap. It's called Majorana losses. A lot of people know what I'm talking about, but I'm not explaining it in detail. Time is over. Any question about what I've discussed? Yes. AUDIENCE: [INAUDIBLE]. PROFESSOR: You said the width of the-- AUDIENCE: Yeah, so [INAUDIBLE]. PROFESSOR: I would say-- the question is about the frequency width, and if you switch on the frequency drive, it's not a delta function. If we are far away from resonance, it doesn't really matter. There is, of course, a criterion that the effective width of the frequency should not be comparable to the detuning. So then you switch it on, but you're so far away from resonance that it doesn't matter if you have a small width, the effective detuning is still large. You then scan toward the resonance. But these are sort of the boundary conditions. The exact solution of the Landau-Zener problem, for instance, assumes you go from minus infinity to plus infinity in the detuning. Nobody does that. So we're discussing sort of finite-duration sweep effects, but usually we are pretty close to the idealized assumption. OK, enjoy the Monday holiday. We meet on Tuesday, and Tuesday is in building 37 in our standard classroom.
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
2_Resonance_II.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: We want to start talking about a seemingly simple but very complex system in physics, the harmonic oscillator. So the next part is actually due to Professor Vladan Vuletic, who worked out the topic very nicely about, how precisely can you measure frequencies? And I don't need to remind you that some of the most accurate measurements in all of physics are done by measuring frequency. It's actually a kind of unwritten rule. If you want to measure something precisely, make sure that you find a way that this quantity can be measured in a frequency measurement. Because frequencies, that's what we can measure-- with synthesizers, with clocks, and such. So therefore, the question, how precisely can you measure frequency, is a question which is actually relevant for all precision measurements. Well, we know we have Fourier theorem. If you have an oscillator which oscillates for a time delta t, we have the Fourier theorem which says we have a finite width of the frequency spectrum delta omega, or there is the spread of frequency components involved in such a way that delta omega times delta t is large or equal than 1/2. The case of 1/2 is realized for Gaussian wave packets. And of course, Fourier theorem should also-- this Fourier limit should remind you of Heisenberg's uncertainty relation. Of course, Fourier limit and Heisenberg uncertainty relation are related because what Heisenberg expressed turns out to be simply the limit due to the wave nature of matter. OK. So I brought the clicker because I want you to think about some seemingly simple question. The first question is whether the uncertainty delta omega times delta t larger than 1/2, does this uncertainty hold for purely classical systems? So think about it and answer in your clicker. A is 1 and B-- oh, I should say that. So we now assume we have a purely classical-- you can even think a mechanical, large mechanical object, purely classical harmonic oscillator. You can observe it for time delta t, or it oscillates for time delta t. Can you determine its frequency better then this uncertainty suggests? 19, 20. Just make up your mind. As I said, your responses are not recorded, so nothing to risk. OK. Yes. OK, the majority gave the right answer. The situation is that the answer is yes, if you have a good signal to noise ratio. So what happens is, if you-- we have to bring in the fact that we have noise. So if you have a wave formed, there may be noise around it. And if you look at the spectrum, the spectral components have a certain width. And this width, delta omega, is given by Fourier's theorem by the time delta t you had for observation. But as you see, you can determine the center of this spectral peak with an accuracy delta omega, which may be much better than the [? width ?] delta omega. And the rule of thumb is that you can split a line by your signal to noise ratio. So typically, the accuracy of a measurement is whatever the width of the spectrum is, and then you can split the line by the signal to noise ratio. This is called splitting the line. Factor of hundred is usually regarded as straightforward. But if you want to go to larger than that, it becomes a challenge. 
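As a concrete illustration of splitting the line, here is a small Monte Carlo sketch with made-up numbers (it assumes numpy and scipy are available): the Fourier width of the observed tone is about 1/dt_obs, but the fitted line center scatters by far less when the signal to noise ratio is good.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: fitting the frequency of a noisy tone observed for a time dt_obs.
# The Fourier width is ~1/dt_obs, but the fitted center scatters by roughly
# (Fourier width) / (signal-to-noise ratio).  All numbers are illustrative.
rng = np.random.default_rng(0)
f0, dt_obs, fs, sigma = 10.0, 2.0, 500.0, 0.2    # true freq, duration, sampling, noise
t = np.arange(0.0, dt_obs, 1.0 / fs)

def model(t, a, f, phi):
    return a * np.cos(2 * np.pi * f * t + phi)

fits = []
for _ in range(200):
    y = model(t, 1.0, f0, 0.3) + rng.normal(0.0, sigma, t.size)
    p, _ = curve_fit(model, t, y, p0=[1.0, 10.05, 0.0])
    fits.append(p[1])

print("Fourier width ~", 1.0 / dt_obs, "Hz")
print("fit scatter   ~", np.std(fits), "Hz   (much smaller, set by the noise)")
```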
Because even if you have a very good signal to noise ratio, you really have to make sure that you know the line shape, that you know that it is, for instance, symmetric, and that the line center of the observed shape is really where the frequency is. I will give you some examples in a few moments. But before that, I would like to continue our discussion whether we can measure the angle of frequency to better than the Heisenberg limit in case of a quantum-mechanical oscillator. So the question is, can you measure the frequency of a quantum-mechanical harmonic oscillator in a time delta t to an accuracy which is better than the limit of Heisenberg's uncertainty? OK, your vote, please. All right. OK. Why don't we hold that for a second and proceed to the next question which is the same? But now, instead of a quantum-mechanical harmonic oscillator, we take something we are very familiar with-- an optical laser-- and we observe a laser pulse lasting a duration delta t. So same situation, but instead of observing a quantum-mechanical harmonic oscillator, we observe a laser pulse. For the laser, can we measure the frequency of the optical radiation better than this equality tells us? OK. At least people are consistent because the first thing I wanted to tell you is that, is the laser actually a classical harmonic oscillator or quantum harmonic oscillator? Well, we use a quantum description of light and the laser is the population of photons in the single note of the electromagnetic field. So in that sense, the laser is fully quantum. But in the limit that the laser is many, many photons-- and some of you know about coherence states. If there's a coherent state with a large photon number, the laser is actually the classical limit of an electromagnetic field. So maybe that tells me that the answers to questions 2 and question 3 should probably be the same. And I want to say more about it. So at least in this class, you were consistent. I have often seen a big discrepancy in the answers between question 2 and question 3. OK, so the answer is yes. So both for the laser-- I first explain the laser to you, and then we go back to the pure quantum system, thing has actually certain subtleties. But for the laser, it is obvious, at least if I tell you how I want to measure it, because I can take the laser and create a beat note with another very stable laser. And I record this beat note on a photodiode. I can realize by making this other laser, the local oscillator stronger and stronger, I can create a beat note which is larger and larger and corresponds to a macroscopic electric current which can be measured with very high precision. So you can realize an arbitrarily high signal to noise ratio by using a strong local oscillator. And then you can actually say the photocurrent, which comes out of the photodiode, is actually-- you can regard this microscopic current as purely classical. And then of course, the answer to the first question applies. So that takes care of question 3 by mapping it actually on question 2. But now by saying that the laser also has a quantum-mechanical limit and I'm not changing anything, we realize that probably the answer to question 2 should also be yes. So let's, therefore, ask our self, what is the situation when the Heisenberg uncertainty relation applies? Well, one is we have to be really careful. It predicts the outcome of a single measurement on a single quantum system. Let me write that down. 
Or if the Heisenberg uncertainty relation sets a limit, how well we prepare a quantum system. It's about a single quantum system, and then we perform a single measurement. So in a sense, if we would say all you have is a single photon, which is a very special quantum system. You have a single photon and you measure the frequency of the photon only once. Then your will find the limit, which is the Heisenberg limit. You cannot, with a single measurement on a single photon, determine the accuracy of the frequency better than this. And of course, you can get higher accuracy by doing repeated measurements or by using many photons. We talk about it more in 8.422, but I just want to remind you, if you have n uncorrelated photons. In other words, we perform n measurements on n different objects, then the signal to noise ratio is-- just by Poisson distribution, square root n. And therefore, the resolution for the frequency of the photons is better than the Heisenberg or the Fourier limit by 1 over the square root n. Some of you-- and actually, in Professor Vuletic's group, there is research on it that if you have correlated-- well, in his case, correlated atoms. But if you had correlated photons, then you can even do better. You can reach what is sometimes called the Heisenberg limit where you are better than the limitation given by Fourier's theorem or by the Heisenberg uncertainty relation by a factor of 1/n. OK. So as far as the question 2 where the quantum harmonic oscillator is concerned, we would say the answer is yes, if you have is single photon at frequency omega 0, which interacts with the quantum harmonic oscillator at frequency omega 0. However, the answer would, even in that case, be no if you have harmonic oscillator levels and you take a photon and by a nonlinear process excite the n-th level. So you have a single photon now. You can resolve the energy delta E of this level. A single photon, a single quantum object, you can define the energy-- that's Heisenberg's uncertainty relation. The energy is determined to that precision. But your frequency omega of the optical pulse is n times omega 0 using nonlinear process. And then you can determine the frequency of the harmonic oscillator, even for a single quantum system and a single photon with a precision which is 1/n times better. So you have to be also careful, but I don't want to beat it to death now, to distinguish between the accuracy at which Heisenberg's uncertainty relation maybe limits the measurement of an energy level. And how this is related to the frequency of the harmonic oscillator. And by going sort of immediately to the n-th level, you can, of course, measure the distance between two levels more accurately because you have increased your [INAUDIBLE]. Any questions? AUDIENCE: Sorry, I just got a little confused about when yes means one thing and no means one thing. So you're saying that you can beat the uncertainty relation in questions 1, 2, and 3 if you can put it in a way where you get good signal to noise? PROFESSOR: OK, I gave you-- sorry for being complicated, but the physics is complicated. I try to give it to you in different layers. I first looked at the classical limit, which is pretty clear-cut. Then, I used a laser. The laser has a classical limit where the answer is the same as in the classical limit. But then we can talk also in the laser in the limit of single photons. And then I said, OK, the single photon interacting with a single quantum system, this is really when it is quantum. 
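As an aside, the scalings just mentioned can be summarized, up to factors of order unity, as

$$\delta\omega_{\text{single}} \;\gtrsim\; \frac{1}{2\,\Delta t}, \qquad \delta\omega_{N\ \text{uncorrelated}} \;\sim\; \frac{1}{\Delta t\,\sqrt{N}}, \qquad \delta\omega_{\text{correlated (Heisenberg limit)}} \;\sim\; \frac{1}{\Delta t\;N}.$$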
And if you have a two-level system and you [INAUDIBLE] it with a single photon, then you can make a measurement which is limited by this inequality. But then I said there is a caveat. And this is if you bring in a nonlinear process. So if your bring in a nonlinear process, we can go up n steps. We can drive-- we can, by some nonlinearity, drive the harmonic oscillator from the ground state to the n state. Then, everything we have said about a single photon and the measurement of the resonance and such applies to this photon. But the energy level of the harmonic oscillator has now been measured with n times higher precision because we can divide by n. So the answer is yes, yes, then quantum mechanics we cannot make it more accurate unless we pull some tricks. And nonlinear physics would be a trick. So in general, the situation where you really limit it by this inequality where your precision is limited would really only apply to a single photon, a single quantum physics, and linear physics. Other questions? AUDIENCE: Yes. Maybe just the phrasing. AUDIENCE: What does delta t mean for a single photon measurement? PROFESSOR: Delta t could be the time you allow yourself to make the measurement. You have a measurement apparatus. You switch it on, you switch it off. Eventually, you want to get out of grad school. I mean, you don't want to take an infinite amount of time for the measurements. There's always a window, delta t, and there's a fundamental limit. The duration of the measurement limits the precision of the measurement. OK. The next thing I want to discuss is the analogy, but also the differences between a harmonic oscillator and a two-level system. So what is a two-level system? Well, it's a system with two levels. What is a harmonic oscillator? Well, it's a system which has an infinite number of equidistant levels. I will tell you tell you later in this course when we talk about the AC and DC stark effect, you talk about the polarizability and light scattering that you can regard the atom or the electron in the atom as a harmonic oscillator. An atom scatters light exactly in the same way as the charge which is connected to some support structure with a spring. How an oscillating charge would scatter light? Well, you know of course, the atom is a two-level system. And the sort of model I make for the electron as a harmonic oscillator at a single resonance frequency, which is 100% exact in the limit of [? low ?] laser power. Well, this realize is a harmonic oscillator. So therefore, what I'm telling you by this example, that there are situations where a two-level system and a harmonic oscillator are the same. Or, create the same [? observance, ?] create the same physics. Do you have any idea when the two systems may look the same or when the two systems react exactly in the same way to, for instance, external radiation? AUDIENCE: At very low temperatures? PROFESSOR: At very low temperature? Well-- AUDIENCE: [INAUDIBLE]. PROFESSOR: Yes. Well, we assume these are atoms and we always start in the common state. So let's assume we have 0 temperature. We have an atom. And maybe what I'm asking is, if I excite the atom and I said, there may be a situation where the atom is a two-level system but it reacts like a harmonic oscillator, when does it break down? AUDIENCE: When the [INAUDIBLE]. PROFESSOR: Exactly. When we go beyond the perturbative limit when we use a strong excitation. So in other words-- and I like to give you the answer before I give you the full explanation, which now comes. 
If you start out in the ground state, you can see at 0 temperature, we have mainly all the population there. If you start now driving the system, we put-- and I will say a little bit more about it. A little bit into the excited state. But it is the nature of a harmonic oscillator when you put something into the excited state that immediately a little bit goes into the second excited state. And this is, of course, something which you can only do in a harmonic oscillator but you cannot do in a level system. So to the extent that we have weak excitation and we can neglect the excitation in higher levels. To that extent, a two-level system and harmonic oscillator are identical. Actually, what I'm saying appears trivial, but I really want you to think about it. It's actually a very profound statement. When can you describe a quantum-mechanical system as a harmonic oscillator? For weak excitation when all what matters is that you have put a small fraction of the system into the first excited state. And you immediately realize that the feature which distinguishes a two-level system from a harmonic oscillator is the phenomenon of, let's say, saturation. You cannot go higher. If you do not saturate a two-level system, it behaves like a harmonic oscillator. And therefore, it behaves completely classical. OK, let's work that out a little bit. I'll give you some examples. So the phenomenon of a two-level system is it has saturation. The maximum energy you can put in is one quantum. Whereas, a harmonic oscillator can never be saturated. Just think of the harmonic oscillator potential parabola. You can drive the system as high as you want. So you can go in this classical language to arbitrarily large amplitudes. So what I just mentioned where the equivalence holds-- I always want you to have an example in mind-- is the Lorentz model for an atom where you describe the atom as an electron connected with a spring to the nucleus. And as we will see in a few weeks, this model gives the identical answer, identical to the quantum-mechanical treatment for probabilities like the polarizability and the index of refraction for gas of atoms or molecules. So if you have a two-level system, we can often think we have an s, ground state, and then p, excited state. And if you do a weak excitation, we have sort of a wave function, which is all, lets say, positive one side-- one sign. And then we [INAUDIBLE] a p orbital, which has a note which is positive and negative. And now we have the positive-negative and the positive and the-- there is the resonance frequency between the two. And that together results in an oscillating dipole. So the simple model of superimposing an s state and a p state at a certain frequency gives us an oscillating dipole, which is the realization of a harmonic oscillator. But the harmonic oscillator, it oscillates. And this is, of course, valid for sufficiently small excitation. So the question we have already addressed in the discussion, what is small? So "small" means population of higher-excited states is negligible. So in other words, as long as the excitation of the first excited state is small, then we can neglect the excitation in the second excited state, which is even smaller. So let me kind of bring out the difference between a two-level system and a harmonic oscillator a little bit more by discussing the situational of cavity QED. Let's assume we want 100% population in the first excited state. 
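Before the cavity QED example, here is a minimal numerical illustration of this saturation point. The two-level column uses the standard steady-state result with a saturation parameter s; the oscillator column is just the linear response an ideal harmonic oscillator would show. The units and the proportionality constant are purely illustrative.

```python
import numpy as np

# Sketch: saturation is what distinguishes a two-level atom from a harmonic
# oscillator.  On resonance, the steady-state upper-state population of a
# driven two-level atom is (s/2)/(1+s) with saturation parameter s; a harmonic
# oscillator has no ceiling -- its excitation grows linearly with drive (~ s/2).
s = np.logspace(-3, 3, 7)                 # drive strength, arbitrary units
two_level = (s / 2) / (1 + s)             # saturates at 1/2
oscillator = s / 2                        # linear, no saturation
for si, tl, ho in zip(s, two_level, oscillator):
    print(f"s = {si:8.3f}   two-level = {tl:6.4f}   oscillator-like = {ho:9.3f}")
# For s << 1 the two agree; for s of order 1 or larger the two-level system saturates.
```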
If you have a harmonic oscillator and the system is prepared in the first excited state, this is also called Fock state with one quantum of excitation. And it's a rather special state where people have worked hard to generate it because you cannot realize it in a harmonic oscillator. And let me sort of explain that in the following way. If you have a harmonic oscillator, you start and you would drive it. And you try to put 100% in the n equals 1 state. Before you have accumulated 100% in the n equals 1 state, you drive it already to higher states. And of course, you know when you start driving a harmonic oscillator, classical or quantum-mechanically, you create a coherent state, which is a superposition of excited states. So we would say an n equals 1 state cannot be excited. We usually get a coherent state, which is a superposition of many, or at least several, states. Whereas in a two-level system, we can just do a [INAUDIBLE] pulse. And to put all the atoms in the excited state is nothing special. Whereas, to have a cavity filled with photons and selectively excite the n equals 1 state, this is special because it's not easy. It's not straightforward. So in cavity QED, you can do it if you have anharmonicities or nonlinearities. So let me explain that. Well, it's an anharmonicity or some form of-- so if you have a situation where you have your harmonic oscillator, but the energy levels are not equidistance. So the difference between this first and second excited state are not the same, then you can drive the system. You can prepare Fock state in n equals 1 like in a two-level system. And you're out of resonance to drive it to higher states. So here, what you utilize is a sort of two-level system. And that allows you to create those special state which are regarded as non-classical, very special states of the harmonic oscillator. And one way how you can create it-- well, if you have an empty cavity, each photon has the same energy. Then you have an equidistant harmonic oscillator. But if you put, for instance-- you add an atom to the cavity, and the radiation is interacting with the atom, then you'll get-- we'll talk about it later. The atom and the photons interact with the Rabi frequency, and then you get a splitting, called the normal mode splitting. And this level splitting is proportional to the Rabi frequency. And we'll discuss it later, but many of you know that the Rabi frequency scales with the square root of the photon number. So therefore, you have a splitting which is proportional to square root 1, square root 2, square root 3. And you have a spectrum which is no longer an equidistant system. And then you can create non-classical states of the photon field, non-classical states of a harmonic oscillator. So anyway, I thought I wanted to bring it up at the beginning of the class, because a lot what we are discussing in this class is we'll rediscover in many situations-- in atoms, in the light, in the way how light and atoms interact, harmonic oscillators and two-level systems. Often, I say they are the same. They behave in the same way. But I hope this introductory [? mark ?] tells you, when can you think in one limit and when do you have to apply the other limit? Any questions about that? Yes, Nancy. AUDIENCE: So are we saying here that without changing the [INAUDIBLE] of the harmonic oscillator, we cannot use it as a harmonic oscillator? Like [INAUDIBLE]? Because when you put an atom in that [INAUDIBLE] changed, it was no more a harmonic oscillator. The levels changed. 
PROFESSOR: Maybe all I'm saying is this is a pure harmonic oscillator. And in a pure harmonic oscillator, I think it's-- I don't know a proof of it, but it seems impossible to prepare a system in the first excited state because every attempt to put an excitation into the system would carry it higher up. You would create a wave packet. You would create a superposition. So you have to do some thing. You have to break the degeneracy of the spectrum of the harmonic oscillator. Of course, what you can do is you can put in an atom. You can use the atom as an aid to just put in exactly one photon into your cavity, and then you can remove the atom. Then you are back to an ideal harmonic oscillator, but you have overcome the limitation of the harmonic oscillator in preparing certain states. Another take-home message you may take from this discussion is harmonic oscillators-- yes, we have quantum harmonic oscillators. But even the quantum harmonic oscillator follows a classical description. So the real quantumness-- what makes quantum optics quantum optics and cavity QED a wonderful example of quantum physics is the physics embedded in a two-level system. That we can put one quantum excitation into something, exactly one is as much quantum as you can get. This is realized in a two-level system and this is related to the phenomenon of saturation. You can saturate a two-level system, but you cannot saturate a harmonic oscillator. So with that, let me make the transition to another simple system. And we want to spend some time on it. And these are rotating systems. So a system which rotates. Well, what do you think? Will it behave, using the discussion we just had, more like a classical system or more like a quantum-mechanical system? Of course, I gave you a very special definition. What brings out quantum mechanics in a system? The harmonic oscillator is always linear. You can drive it as hard as you want. You drive it hundred times stronger and the reaction is hundred times more. Everything is linear. The quantumness of a two-level system comes from saturation. What about a rotating system? Something which can go in a circle. AUDIENCE: [INAUDIBLE]. So you're not going to have this degeneracy [INAUDIBLE]. PROFESSOR: OK, very good. You're immediately applying what is the spectrum. The spectrum is not equidistant, so it should bring in a difference. It's sometimes hard to ask a simple question without giving the answer away. But what I had in mind was a gyroscope, a gyroscope which is precessing. And what I wanted to sort of lead you with the question is, if you have something which rotates, the amplitude is limited. A rotating object, let's assume a magnetic, classical magnetic moment. It can have a precession angle which is 180 degrees, but that's a maximum. In other words, the rotating system when you excite it has a maximum amplitude, exactly as a two-level system. So that's what a rotating system and a two-level system-- actually then, I'm now specializing on more rotating gyroscope. If you have a free rotator, this, of course, can rotate with [INAUDIBLE] angular momentum and the excitation spectrum would not be bound. So I think I have to rephrase the question the next time I teach the class. I wanted to ask you here about a special rotating system, which is a precessing gyroscope. So rotating system. If you think about precessing gyroscope, it has a bound on the amplitude it can be excited. So what I want to show you, today and in the next lecture, is the motion of classical magnetic moments. 
When you think about the motion of classical magnetic moments, think about a compass needle, a magnetized needle which has angular momentum. And then, the system is acted upon with a magnetic field. So this is our system. And if angular momentum, magnetic moments, and torque come into play, we have the physics of classical rotation. But the excitation spectrum here is limited because you can flip a compass needle and this is a maximum excitation. When the North Pole points in the opposite direction, that's the maximum excitation you can give it. So therefore, it has a limited amplitude of its excitation, unlike a harmonic oscillator. And at this point you may say, but maybe somewhat similar or analogous to a two-level system. But the surprising result is-- at least it was surprising when I first learned about it. That it's not just somewhat analogous to a two-level system, it actually captures exactly a lot of the properties of the dynamics of a two-level system. So let me write that down. The motion of classical magnetic moments provides a model. It's actually an exact model which captures essentially all features of the quantum mechanical two-level system. I want to show you today and the next lecture the concepts or Rabi frequency, of generalized of resonant Rabi frequency, all of that you find in the classical motion of a magnetic moment. Or, for instance, the physics of rapid [INAUDIBLE] following, [INAUDIBLE]. A lot of physics we would usually associate with the quantum system, we find it here in a purely classical system. What aspects of the two-level system will we not find? Any ideas? Will? AUDIENCE: Spontaneous emission? PROFESSOR: Spontaneous emission, definitely. Yes. But actually, in a two-level system, in a quantum-mechanical two-level system, which we drive with a single frequency, spontaneous emission is also missing. Spontaneous emission, as we will discuss later, only comes into play when we say the excited state of the system interacts with many, many modes and not just the one mode we apply. And typically, if you go to high frequency, we have an optical oscillator, we cannot avoid spontaneous emission. Whereas, for a quantum-mechanical spin 1/2 interacting with microwaves, we can completely eliminate spontaneous emission. So spontaneous emission, I would say, comes into play at high frequency. So that's correct. But there is one aspect even at low frequency, one aspect of quantum mechanics which we cannot capture. AUDIENCE: Having different G in a magnetic field? PROFESSOR: Different G factors, yes. That's, as we see, a more quantitative aspect. But there is one very important feature about quantum mechanics you will never get in a classical system. AUDIENCE: Spin? PROFESSOR: Spin. AUDIENCE: [INAUDIBLE]. AUDIENCE: [INAUDIBLE]. PROFESSOR: OK. I think you're skirting around. It's a quantum measurement process and projection. If you perform a measurement on a compass needle, it can be at any angle. But if you do a measurement on a quantum system, you do a projection. After the measurement projects a system in either spin up or spin down. So the probabilistic nature, the projection occurring in a quantum measurement is, of course, absent in a classical system. But when you say spin 1/2 and quantum levels, this is sort of implied in it. If there are only two levels, there's only up and down and not an infinite number of angles. 
So what we will actually see is that we find an exact analogy between the classic system and the quantum mechanical system when we compare expectation values. But the individual measurement, the individual quantum measurement because it is projective is different. OK, with that motivation, we are now talking about magnetic resonance. And we will later do a fully quantum mechanical description, but to get the concepts and also understand the analogies to a classical system, we want to understand what happens when we have a classic magnetic moment in magnetic fields. And that includes static fields, but then we want to excite the system. We want to drive the system, and this is time-varying fields. And we will assume that the fields are spatially uniform. So let me just remind you of the obvious equations of motion that also allows me to introduce the nomenclature. The interaction energy between a classical magnetic moment mu and the magnetic field is mu dot B. The force is the gradient of the interaction energy, but it is 0 for uniform fields. So therefore, we don't need to look at the force. But the next thing which we then have to consider is the torque. And when we think about the classical magnetic moment, you can think about a compass needle. But magnetic materials are complicated. If I think about the simplest magnetic moment, I think about a loop of current I and area A. And that's sort of the classical model for magnetic moment. So we have a magnetic moment mu. And if we now add a magnetic field, which is at an angle, we have a torque. But just to make sure the torque is something which is nothing else than the Lewin's force on the electrons. But since electron is forced to go in a circle, we don't have to look at the Lewin's force microscopically. We just immediately jump to the torque. And the torque is what describes the dynamics of the system. So we have torque. When we have torque, we want to formulate the problem in terms of angular momentum. And our equation of motion is the classical equation of motion that the derivative of angular momentum is given by that. Now, what makes those equations immediately solvable-- and to find the very easy limit is that the magnetic moment of the system we assume is proportional to its angular momentum. Well, if you have a mechanical object which goes in a-- if you have a charged object which circles around a central potential, then you, of course, find immediately that if it moves faster, it has more angular momentum. It has a larger magnetic moment. So we use that as the defining equation for what is called the gyromagnetic ratio. Which, of course, is very closely related to G factors, which we define later on for atoms. The gyromagnetic ratio is the ratio between magnetic moment and angular momentum. And then, we find that the derivative of angular momentum is given by this equation. And this is now an equation which you have seen in classical mechanics and in many situations. The solution of that is a pure precession. The motion is pure precession of the angular momentum around the axis of the magnetic field. So in other words, we have the axis of the magnetic field. We have the angular momentum. And at a constant tipping angle, we have the tip of the angular momentum precesses around the magnetic field. And the precession happens with an angular frequency which is called the Larmor frequency. The Larmor frequency, the frequency of precession, is proportional to the magnetic field and the gyromagnetic ratio. 
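For reference, the relations just written down are, in this notation,

$$\vec{\mu} = \gamma\,\vec{L}, \qquad \frac{d\vec{L}}{dt} = \vec{\mu}\times\vec{B} = \gamma\,\vec{L}\times\vec{B}, \qquad \omega_L = |\gamma|\,B,$$

with omega_L the Larmor precession frequency about the field axis.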
So let me give you an example for an electron. The gyromagnetic ratio is 2 pi times 2.8 megahertz per Gauss. And we've discussed last class what it means when I take out 2 pi. Because the Larmor frequency is an angular frequency. And angular frequency is not measured in Hertz because there is a 2 pi factor. And I just make it obvious where the 2 pi factor is hidden. Now, this is for the electron. But if you have an ensemble of classical charges, an [INAUDIBLE] distribution of classical charges-- well, with the same charge to mass ratio, you find that the gyromagnetic ratio is 1/2 of that. And this here is the Bohr magneton, which we will use quite often in this course. The third example is the proton. The proton is heavier, has a heavier mass. About 1,000 times heavier than the electron. And therefore, the Larmor frequency is not megahertz per Gauss, it is kilohertz per Gauss. Any questions? These are more definitions and setting the stage. Let me make a note. It's one of the many notes I will make in this course about factors of 2. There are factors of 2. If you miss it, you qualitatively miss the physics. And let me in that context by talking about precession frequencies and magnetic moment, explain a factor of 2 which is related to the G factor of the electron. So if you have the electron which has spin of 1/2, in units of the Bohr magneton, what is the magnetic moment of the electron? 1/2, 1, or 2? What is the magnetic moment of the electron? 1/2, 1, or 2? AUDIENCE: 2. AUDIENCE: [INAUDIBLE]. PROFESSOR: I should have a clicker question on there. No. It's 1. 1 Bohr magneton. And let me sort of show the level structure of it. This is [INAUDIBLE] energy. You have spin up and you have spin down. The difference is 2.8 megahertz per Gauss. And if you ask, what is the precession frequency of an electron in a magnetic field? It's 2.8 megahertz if the field is 1 Gauss. And if you want to drive the rotation, if you want to change the precession angle-- we'll talk about that in great detail-- you better drive the system at 2.8 megahertz. But 2.8 megahertz is the difference of plus 1.4 and minus 1.4. And therefore, the energy of the electron in a magnetic field is either plus or minus 1.4 megahertz per Gauss, and 1.4 is 1 Bohr magneton. The magnetic moment of the electron is 1 Bohr magneton. So precesses at 2.8. OK, but let us contrast this with a classical current distribution, which has 1 unit of h bar, which means the magnetic moment is 1 Bohr magneton exactly as the electron has. Well, quantum mechanically means it has three different level-- minus 1, [INAUDIBLE] 0, and 1 because it has 1 unit of angular momentum. Since the system has 1 Bohr magneton, when the system stands up or stands down, the difference between spin up and spin down is 2.8 megahertz per Gauss. OK, my question now is, what is the precession frequency of this classical charge distribution which has 1 unit h bar of angular momentum? And I've just shown you the level structure. If you will now create a wave packet of those three levels, which means-- a wave packet of the three levels means you have a spin which points in one direction. What is the precession frequency of that system? Let's have a clicker question. So what is the precession frequency? Oops, what happened? So let me give you three choices. 2.4, 1.4, or 0.0 megahertz per Gauss. So please vote for A, B, or C. Yes, it's 1.4. So they have the same magnetic moment. Quantum mechanically, you would see here is a G factor of 2. Here's a G factor of 1. 
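For reference, a quick numerical check of where the 1.4 and 2.8 megahertz per gauss come from (constants rounded):

```python
mu_B = 9.274e-24      # Bohr magneton in J/T
h    = 6.626e-34      # Planck constant in J*s

hz_per_gauss = mu_B / h * 1e-4          # 1 gauss = 1e-4 tesla
print(hz_per_gauss / 1e6)               # ~1.4 MHz/G : one Bohr magneton
print(2 * hz_per_gauss / 1e6)           # ~2.8 MHz/G : electron spin flip (g ~ 2)
```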
But the easiest explanation is it precesses. The precession is a beat note between two energy levels. Here, the beat note between those two energy levels happens at 1.4 megahertz. Therefore, when you want to drive it with external radiation, you have to drive it at 1.4 megahertz. You want to drive it level by level. Whereas this system has a beat note between two levels, and the difference is 2.8 megahertz. Anyway, whenever you get confused about factors of 2 with magnetic moments and precessing systems, just think about those two simple examples. They have all the factors of 2 hidden in the simplest example possible. All right. We have a rotating system. We have a system which precesses. So we want to learn about rotations in general. And what I want to show you is that under very general circumstances, we can solve the equation, the dynamics of the system, by going into a rotating frame. You all know about rotating wave [INAUDIBLE] rotating wave approximations in quantum physics. I'm simply talking about a classical system, and I want to solve the equations for the classical system by going to a rotating frame. And I want to show you where this is exact and where not. OK, so this is actually something which we do in undergraduate classes, in 8.01-- definitely, in 8.012. But let me remind you: when we have a rotating vector which rotates with a constant angular frequency, then the time derivative of the vector is the cross product. But now we want to allow-- so this is when the vector is constant and it just rotates. But now, we want to assume that there is something else. There is an arbitrary time dependence of the vector in the rotating frame. So we have a vector which changes according to A dot [INAUDIBLE]. But it also rotates. And that means-- and this is exactly shown in classical mechanics-- that in the inertial frame, the time derivative is the sum of the two. It is the change of the vector in the rotating frame plus omega cross A. So we have this equation. It has the two simple limiting cases that if there is no change in the rotating frame, then we retrieve the kinematics of pure rotation. When our rotating frame is not rotating, or it rotates at 0 angular frequency, then of course the two time derivatives are the same. But anyway, what I derived for you is an operator equation: the time derivative in the rotating frame is related to the time derivative in the inertial frame in this way. And now we want to apply it to our angular momentum L dot. So this is just applying the operator equation to our angular momentum L. And now we want to specialize that-- we just discussed that the time [INAUDIBLE]. I'm just looking for a sign problem, but it's sometimes hard to fix sign problems at the board. The equation of motion for the angular momentum was that it's L cross gamma B. Oh, I changed the order. There is no sign problem. And then, I add this. So if we now describe our precessing classical magnetic moment, which has the equation of motion that L dot is L cross gamma B, if you describe it in a rotating frame, then the equation of motion gets modified as follows. Now, what happens is we-- let me factor out the gamma. Gamma L cross B is the real field. So what we observe is that when we go into a rotating frame-- and this is exact-- the real magnetic field gets replaced by an effective magnetic field, because there is an extra term added to it, which we can call a fictitious magnetic field.
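Written out in one common sign convention (a sketch; the board may use the opposite sign for Omega), with Omega the angular velocity vector of the rotating frame:

$$\Big(\frac{d\vec{L}}{dt}\Big)_{\!\mathrm{rot}} = \Big(\frac{d\vec{L}}{dt}\Big)_{\!\mathrm{inertial}} - \vec{\Omega}\times\vec{L} = \gamma\,\vec{L}\times\Big(\vec{B} + \frac{\vec{\Omega}}{\gamma}\Big) \;\equiv\; \gamma\,\vec{L}\times\vec{B}_{\mathrm{eff}},$$

so the fictitious field is Omega over gamma, and it vanishes when the frame rotates at Omega equal to minus gamma B.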
So this is just an exact transformation of our equation of motion for a precessing system into the rotating frame. And now, of course, we haven't made any assumptions what the rotating frequency is. But if you would choose the rotating frequency to be the Larmor frequency minus gamma times B, then our effective magnetic field vanishes. And then we know, because there is no magnetic field, that the angular momentum is constant in the rotating frame. In other words, the dynamics of the system means that L is constant in a rotating frame. And if you want to know what happens in the original, in the [? lab ?] frame, in the inertial frame, we just have to rotate back. OK, that's something we want to take advantage of. But we fully apply it in the next class. I have only a few minutes left today, and I want to spend those few minutes to talk about another factor of 2. Now, let me ask you the following. If you have an electron in a magnetic field, well, you know that the electron goes in circles. It's the cyclotron motion of the electron. Now, just give me one second. I forgot to mention something for the classical system. For a classical charge distribution, the Larmor frequency is the charge of the particle. In case of the electron, it's e. Divided by 2m times B. So the Larmor frequency is e over 2m times B. So therefore, we know that when we have an [INAUDIBLE] of positive and negative charges, and there is an effective magnetic moment, that this magnetic moment would precess at the Larmor frequency which is given by this expression. So who knows what the frequency of the cyclotron motion is? So when we have a free electron, what is the frequency at which it revolves? At which it goes in circles? AUDIENCE: 2 times the Larmor frequency. PROFESSOR: It's two times the Larmor frequency. OK, I just wanted to mention it. There's an important factor of 2 which you should know about. In previous classes, I spent 10 or 20 minutes to teach you about a few of them which is called Larmor's theorem. But I summarized the argument on the atomic physics wiki and I can't say more here in class than I've written on the wiki. So please read on our atomic physics wiki about Larmor's theorem. Larmor's theorem shows you that under certain assumptions, you can transform away the effect of a magnetic field by going to the Larmor frequency. That looks exactly like what we have discussed here. But what we discussed here was exact. There was no approximation. Whereas, the derivation of Larmor's theorem, which talks about charge distributions-- not about magnetic moments, about charge distributions-- has to make certain approximations. So just want to point your attention to that there are two derivations about Larmor frequency. One is exact, which I gave to you. There is another one which is Larmor's theorem, which applies to isolated charges which is not exact. But they both conclude that you can transform away the effect of a magnetic field by going to rotating frame at the Larmor frequency. And the fact that Larmor's theorem is not exact is actually illustrated by this example of a free electron where you have a factor of 2. And this comes because the term which you neglect when you derive Larmor's theorem is negligible if the situation is that you have electrons and charges forming magnetic moments. But if you have a free electron, the neglected term is exactly 1/2 of the dominant term. And that is why the cyclotron frequency is twice about Larmor frequency. So never confuse the cyclotron and the Larmor frequency. 
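The factor of 2 in question, written out for a free electron of charge e and mass m (SI units):

$$\omega_{\mathrm{cyclotron}} = \frac{eB}{m}, \qquad \omega_{\mathrm{Larmor}} = \frac{eB}{2m} = \frac{1}{2}\,\omega_{\mathrm{cyclotron}}.$$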
And the factor of 2 is not related to a G factor of the electron or such. It's really the difference between the physics of a free charge and the physics of a magnetic moment. Any questions? OK, well, then we are finished for today. A reminder, no class on Wednesday, but we have class on Friday in the different classroom.
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
24_Coherence_IV.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Good afternoon. So in the last week of this semester, we will be finishing up the chapter on coherence. What we want to continue to explore today is the presence of a dark state. If you have a three level system, and that's what we went through last week, we will always find one state which is dark, which means we fully illuminate the atom with laser light, but there is one state which is not coupled to the excited state which does not scatter light, and therefore is dark. I showed you that for very general conditions, if you have two laser beams exciting the two states, there will always be a coherent superposition of the two states, which is the dark state. The dark state is the novel feature of three level systems, and I want to show you different perspectives of it. We started out by talking about just the existence of the dark state. Then we talked about dark state transfer. You can regard, for detuned light, the dark state as the lowest energy state of the system. And then an adiabatic theorem tells you that you can keep particles in the dark state even if you change the laser parameters and change what state is now the dark state. This is the basis of coherent population transfer, or the famous STIRAP method. There is another aspect of the dark state which gives us a possibility of lasing without inversion. As I reminded you, lasing has a threshold, which is inversion that you need more atoms in the excited state than in the ground state because the atoms in the ground state absorb the laser light and you want a net gain. However, if you have a dark state, you have a situation where the atoms do not absorb the laser light, and therefore, the conditions for net gain have changed. And indeed, lasing without inversion becomes possible. I started to explain to you the way how lasing without inversion could come about. It relies on the fact that in a three level system, absorption destructively interferes but stimulated emission does not destructively interfere. And therefore, you can have lasing without inversion in such three level systems. I showed you one possible realization, and this is about hydrogen in a DC electric field. If we mix the 2S and 2P states with an electric field, we have this three level structure. And for a laser tuned right in the middle between the two states, the two amplitudes for excitation cancel exactly, and therefore, you have a zero absorption feature. However, if you put now a little bit of population into, let's say, the upper state, this upper state in the wings of its profile has still gained for stimulated emission, and what we get is lasing without inversion. Now you say, OK, but that's a hydrogen atom. Which atom is really degenerate between two levels, and you can split regeneracy with a static electric field. Well, you're already an expert at this point in the course. If the 2S and 2P state are widely separated, well, you add a photon, and the photon, which is in resonance with the 2S and 2P state, creates, in the dressed atom picture, degeneracy. Maybe I should have shown the P state higher. The 2S state with one more photon and the 2P state with one photon less have the same energy, and then you create exactly this situation. 
So therefore, the way you can realize that in atoms other than hydrogen is use an AC electric field to mix S and P states. And I'll show you in five minutes a little bit more in detail what I mean by that. There is a trivial realization which I want to mention for lasing without inversion, and this would be if you have a three level system with an excited state, two levels, g and f in the ground state. You may have an inversion for the e to g transition. And therefore, you can get lasing because the population in this state f is not coupled. This is pretty trivial, but the more subtle part, of course, is that we can realize it using a driven system using a control laser by creating the same situation with the bright and the dark state, and population in the dark state is hidden from the light and does not absorb light. Let me just indicate that in both those states, we would have no absorption, dark state. Those examples may raise the question whether whenever you have lasing without inversion, if you can find a basis where you have inversion again between the two levels which are relevant and the extra population is just hidden. I want to make two comments about it. This question is sometimes discussed in the literature, sometimes in a semi-controversial way. There are two comments about it. One is when you start dressing up your system with laser beams, you have strong control lasers. You have two lasers, omega 1, omega 2. One is often a strong control laser and the other one is the weak laser where we want to have lasing. You have actually a time dependent system driven by time dependent fields, and once you have a time dependent system, it's no longer clear what the eigenstates are, what the populations are, and what are the coherences. There is no longer a unique way to distinguish what are the eigenstates because every state is, so to speak, time dependent almost by definition. On the other hand, I think the example I gave you with atomic hydrogen where you just a little bit of mixing with an electric field is an example where you genuinely have less population in the excited state than you have in the ground state, and at least the equation tells you, even without inversion, you have a net gain. So my own understanding of that situation is that in many situations, you can actually reduce it to a simple picture where you have simply hidden population in a dark state, but without any sort of unnatural definitions, you may not find that in some other systems. Questions about lasing without inversion? Nancy? STUDENT: Is lasing without inversion important in lab, as opposed to lasing with inversion, or is it more about teaching us [INAUDIBLE]?? PROFESSOR: Well, lasing without inversion has definitely been touted as a way to get lasing deeper in the UV, to get lasing for very blue transitions because when you want to create inversion-- this actually has been the problem in creating x-ray lasers in atomic systems. If you have larger and larger energy separation, spontaneous emission scales with omega [? cube. ?] And so therefore, it becomes harder and harder to fulfill the ordinary gain equation. STUDENT: So in those cases, even these last two methods [INAUDIBLE] because [INAUDIBLE]?? PROFESSOR: Well, lasing without inversion alleviates the requirement to build a laser, and so people have discussed that where it's really hard to create a laser in the conventional way, deep, deep in the blue of the UV or in the x-ray regime, that lasing without inversion may help. 
I'm not aware that any practical development has emerged from that because there is a price to be paid, and that is usually in the form of coherence. You need a certain degree of coherence in your system to be able to do that. It's an idea which is powerful, but as far as I know, there was no killer application of it. The importance of dark states is definitely in slow light, manipulation, quantum computation, and concepts of storing light, and this is what we want to discuss next. Actually not next, but after next. What I first want to discuss is another aspect of the dark state, and this goes by the name EIT, electromagnetically induced transparency. I can introduce this topic by a question to Radio Yerevan, is it possible to send a laser beam through a brick wall? And the answer of Radio Yerevan, if you know the joke, is always, in principle yes, but you need another very powerful laser. So in an incoherent way, of course, a very powerful laser can drill a hole into the wall and then the next laser can go through the wall without absorption, but you can be smarter. If the very powerful laser through coherence puts all the atoms in the brick in the beam path of the laser into a coherent superposition state, and then they become a dark state, then your laser can go through a brick wall. So can a laser beam penetrate an optically thick medium? And the answer is yes, with the help of another laser. Original ideas along those lines were formulated by Steve Harris, who has really pioneered this field, and he first considered special auto-ionizing excited states which couple to-- there were two pathways of coupling into the continuum, but later work has shown that it can be realized in a lambda system. Let me talk about this conceptually simpler realization in a three level lambda system. Let's assume we have again our normal three level system, g and f, and an excited state which has a width, gamma. And we want to send a probe laser through a dense medium, and it would be completely absorbed by the resonance to the excited state. But now we can have a strong coupling laser with a Rabi frequency, omega c, and so if we drive the system very strongly, we can create a situation where the coupling laser does, if it's strong enough, complete mixing between the excited state and the ground state, and that means if you have two levels which are completely mixed, they are split by the energy or the frequency of the coupling. So in other words, what we obtain is we have now two states, e plus f and e minus f. The way you should read this is the following. You can just assume for a second that the state f were degenerate with the excited state, and then I put in a very strong mixing between the two. This is exactly the example we had with hydrogen in an electric field, and then we get two states which both have width gamma over 2. They are strongly mixed, and the splitting is nothing else than the matrix element of the electric field. But if you don't have two degenerate states and we add a photon, then the photon-- I mentioned it again and again, in the dressed atom picture-- creates a degeneracy. You can just add a photon, draw a dashed line, and this is your virtual state if you want. Or if you look at Schrodinger's equation, you have something which oscillates at the frequency of the state, f, but now you multiply it with an electric field which oscillates at the resonance frequency, and then you have something which oscillates at the sum of the two frequencies, and this is exactly what I indicated with a dashed line.
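To put the two strongly mixed states just described into symbols (a minimal sketch, taking the coupling Rabi frequency as omega c and ignoring detunings and overall phase conventions):

\[
|\pm\rangle \;=\; \frac{1}{\sqrt{2}}\big(|e\rangle \pm |f\rangle\big),
\qquad
E_\pm \;\approx\; \pm\,\frac{\hbar\,\Omega_c}{2},
\qquad
\Gamma_\pm \;=\; \frac{\gamma}{2}.
\]

A probe tuned halfway between these two states drives two excitation amplitudes of opposite sign, and their cancellation is the dark resonance discussed next.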
So that's how you create, so to speak, a degeneracy by using the frequency of the laser to overcome the energy splitting, and the result is that you have created exactly this excited state level structure. And if you now look at the ground state, and our photon is tuned right between those two continua, then we have a dark resonance. And in order to accomplish this, I'm not going into any calculations here, but you need a sufficiently strong coupling laser who can accomplish that. For instance, coupling laser which is stronger than the spontaneous emission rate or decoherence rate gamma in the excited state. So if we now scan the probe laser and we look for transmission, let me assume for simplicity that there is no relaxation between the two ground states. If the coupling laser strength is 0, then we have a broad feature, which is simply the single photon absorption of the probe light. If we have an infinitesimal coupling laser strength, we get a very, very sharp feature, and if the coupling laser is stronger and stronger, we get a window of transparency, and the width is given. The width delta omega of this central feature is given by the Rabi frequency of the coupling laser. So what are possible applications? One is you can design non-linear materials. Usually, when you want to have frequency conversion processes or optical switches at one laser beam affects another laser beam, you want a very strong non-linear response of your medium, you usually get that if you go near resonance, but near resonance, you have strong absorption. But now, using the concept of EIT, you can have both. You can have the strong non-linear response of your medium near resonance but you suppress absorption, you get a window of transparency using EIT. So you can have near resonant materials without absorption. Again, in principle, I'm not aware that this has really taken off in a bigger time. One would be if you want to do very sensitive spectroscopy. Assume you want to measure one isotope which has a tiny little abundance but you have to observe it against the background of a very strong isotope. If you could switch off the absorption of the background, the absorption of the strong isotope, you could still see a small amount of a trace isotope in the presence of a strong absorption line. So sensitive detection of trace elements. Questions about that? I've given you qualitative pictures for lasing without inversion and for EIT. I want to give it a little bit more quantitative touch, not by going through to the optical Bloch equations which would be necessary to describe all features of it, but at least I want to give you one picture where you can derive and discuss things in a more quantitative way, and this is the eigenstate picture. I've also done here what I've said several times, that when we have splittings between the levels, we can actually focus on what really matters, namely the detunings, by absorbing the laser frequency into the definition of our levels, and that's what I've done here. Instead of using levels g, f, and e, I have levels with g, f, e, but this is the photon number in laser field one and the photon number in laser field two. So if all of the photons, the laser field one and laser field two, would be resonance with their respective transitions, then all those three levels would be degenerate. 
But now in the three level system, they're not degenerate only because we have a relative detuning delta, detuning small delta from the Raman resonance, and we have a detuning peak delta, which is sort the common detuning of the Raman laser from the excited state levels. So therefore, if I did define the Rabi frequencies, the Rabi frequencies over 2 are gain coefficient, and the Rabi frequencies are proportional to the electric field. Therefore, they scale with the photon numbers n and m in the two laser beams. If I do that, I have a really very simple Hamiltonian. On the diagonal, we simply have the detuning of both laser beams form the excited state. Here we have the Raman detuning, and we have two couplings to the excited state. One is laser field one, g1, with n photons, and the other one is laser field g2 with m photons. Any questions about? Just setting up the simple equations which we have done a few times. Let me focus first on the simple case that everything is on resonance. Then, if everything is on resonance, we have the structure which I've shown here. You can sort of say you have three levels, which are degenerate without any Rabi frequency, because the detunings are all zero, or all the diagonal is zero. And then the off diagonal matrix elements, the Rabi frequencies, are just spreading the three levels apart, and the general structure of this matrix is that in the middle, you always have a state which is just a superposition of g and f. So it's a dark state. It has no contribution in the excited state. And the two outer states have equal contribution of the excited state. So the excited state has been distributed over the two outer states, and the widths of those levels is therefore gamma over 2. And we know when we had two levels and we were driving them with a Rabi frequency resonantly, we had splittings which were just given by the Rabi frequency, and now the splitting between the outer level and the dark state is the quadrature sum of the two Rabi frequencies. PROFESSOR: So that's a very general structure. I want to go back to the situation where one laser is a probe laser. Let's assume this is our laser beam which wants to go through the brick wall, and the other laser has to prepare the system. Let me discuss the limit where the photon number becomes very small in laser one, the photon number n goes to unity, and this is much smaller than m. So we have the situation of a weak probe field. If we have this limit, then the dark state has much, much more amplitude in state g, and the state g is almost decoupled. It's in a trivial way. The dark state and the laser beam is very strongly mixing the state f and the excited state. So in this limit, we have a nice physical situation. I should actually point out that you can solve most of those situations for three level systems analytically. It's just those explanations get long and are not very transparent. So what I'm trying here in the classroom, I try to pick certain examples-- weak probe field, resonance-- where we can easily understand the new features which happen in the system. So the situation we have now prepared is we have our dark state, which is level g, we said we have only one photon in our probe beam, and we have lots and lots of photons in the coupling laser which couples the other ground state, f, to the excited state, e. So we have the structure that we have now two states which have half of the widths of the excited state, and they are both bright. 
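For reference, the matrix just described can be written out as follows (a sketch; the ordering of the basis states and the sign conventions are my guess at the blackboard notation):

\[
H \;\approx\; \hbar
\begin{pmatrix}
0 & 0 & g_1\sqrt{n} \\
0 & \delta & g_2\sqrt{m} \\
g_1\sqrt{n} & g_2\sqrt{m} & \Delta
\end{pmatrix}
\quad \text{in the basis } \{\,|g\rangle,\ |f\rangle,\ |e\rangle\,\},
\]

and for delta = Delta = 0 the middle eigenstate is the dark state, \( |D\rangle \propto g_2\sqrt{m}\,|g\rangle - g_1\sqrt{n}\,|f\rangle \), with no excited state admixture, while the two outer eigenstates share the excited state character equally and are split from the dark state by the quadrature sum of the two Rabi frequencies.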
We can call one the bright state 1 and the other the bright state 2. The splitting is related to the Rabi frequency. Let me just call it delta bar. This is now the level structure which we have, and what I want to now emphasize in this picture is the phenomenon of interference. We've talked about interference of amplitudes, but now I want to take the system and show you how we get now interference when we send one probe photon through the system. What we can now formulate is a scattering problem. We have one photon in our probe beam in a special mode, but then, all the other modes are unoccupied. And when we are asking, does the probe photon get absorbed? Does the weak laser beam get stuck in the brick wall? We are actually are asking if it is possible that we scatter the photon out of the mode and it gets absorbed. But you know absorption is actually always a two photon process involves spontaneous emission, and we have emitted the photon into another mode. The scattering problem is a two photon process, and the matrix element needs an intermediate state, but now we have two. We start with one photon, we have the light atom coupling, we can go through bright state 1. From bright state 1, we have the light atom coupling again, and eventually, we go back to the ground state without four photon. And here we have a detuning. Let's assume we are halfway detuned between the two bright states. And then we have a second amplitude, and it's indistinguishable. We have a Feynman double slit experiment, and everything here is the same except that we scatter through bright state 2. This matrix element, when it vanishes, this is now the condition of electromagnetically induced transparency. But we want to now understand what happens when we detune the probe laser. So we have set up the system with a strong control laser, we have completely mixed the ground state f with the excited state, and now we want to ask, can a weak probe laser go through the brick wall How much of the probe laser is absorbed? So what we want to understand now, and this is a new feature I want to discuss now, what happens when we detune the probe laser by delta? Well, it's clear, and I just wanted to show you the formula. We had two detunings here, and if we detune the probe laser by delta, that would mean now that those denominators are no longer opposite but equal. In one case, we add delta. In the other case, we subtract delta. And therefore, we have no longer the cancellation of the two amplitudes and we have the scattering of the photon. This is sort of the framework, and I want to show you now several examples. I want to show you examples of a probe absorption spectra. It's more a little bit of show and tell. I want to show you the result if you would work that out. It's pretty interesting. So first, I want to discuss the case where the coupling laser is really in resonance. We are near the one photon resonance. Let me assume first that the Rabi frequency of the coupling laser is much, much larger than gamma. Then if you look at the probe transmission, we have the situation that we know when we are right here in the middle, we have a window of transparency, but if we detune by the Rabi frequency over two, we hit the bright state 1, and if we detune in the opposite, we hit the bright state 2. And the splitting in this situation, delta bar, is nothing else than the Rabi frequency omega 2. 
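The two-path argument just made can be written compactly (a sketch, ignoring the widths gamma over 2 in the denominators and writing delta p for the probe detuning from the midpoint between the bright states):

\[
A_{\text{scatt}} \;\propto\; \frac{1}{\bar\delta/2 - \delta_p} \;+\; \frac{1}{-\bar\delta/2 - \delta_p}
\;=\; \frac{2\,\delta_p}{\bar\delta^{2}/4 - \delta_p^{2}},
\]

which vanishes exactly at delta p = 0, the condition for electromagnetically induced transparency, and is nonzero for any finite probe detuning.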
So what we have is we have the detuning in between, and we know already here is our special point, and that's what we have discussed for long, electromagnetically induced transparency. The two bright states are at plus and minus half the Rabi frequency of the coupling laser. We know that we have very strong absorption here. What you get is sort of a broad feature. But this was a situation when we drive the system very strongly. I now want to discuss the case that the coupling laser is much weaker than gamma. Then the splitting between the two bright states was the Rabi frequency, but the width is gamma, so then the two bright states pretty much merge into one continuum feature of width gamma. For that situation, we have a broad feature of absorption, which is on the order of gamma, or if you have an opaque medium, of course, you put the absorption coefficient into the exponent of an exponential function and you get a blackout which is wider than gamma. But then we have our phenomenon of EIT, but the width of this feature is now much smaller than gamma. So what we have here in those two situations where the lasers are close to resonance with the excited level, we have the situation that the strong absorption feature due to the bright states, either one strong feature of width gamma or two features here, those broad, single photon absorption features, they really overlap with our window of transparency. And what I find very insightful is now to discuss the situation where we separate the two, and I want to show you what happens. It gives a very interesting profile. So what we want to discuss now are the famous Fano profiles, and I want to discuss two photon absorption features. We have discussed the case where the one photon detuning was zero. We want to go now to a large one photon detuning delta 2, and the new feature now is that this will separate the window of electromagnetically induced transparency from the broad absorption features. Let me just draw a diagram of the states. As usual, we have our two states in a lambda transition, g and f. We have this continuum of the excited state, but now, and this is what often is done in the experiment, you're not using Raman lasers which are in resonance with the excited state. You're using Raman lasers-- here is the strong coupling laser-- which are far detuned. So here we have a detuning for laser two, capital delta 2. The Rabi frequency is omega 2. Here we have a weak probe laser omega 1, and the detuning. Let's call it capital delta 1. So in order to keep the situation simple, we use a weak probe laser. Omega 1 is small. And our Raman detuning delta is the difference between the two single photon detunings, capital delta 2 minus delta 1. For these situations, there are nice analytic expressions, and together with the class notes, I will post a wonderfully clear paper by [? Lunis ?] and [INAUDIBLE] where the two authors discuss this situation. Let's just figure out what are the features in the system. Let's just go through different situations. One is if the Raman detuning is zero, we should always get the phenomenon of electromagnetically induced transparency. If we don't have any coupling, then we simply have a two level system, and if we tune omega 1 into resonance, we get simple single photon absorption. So if we look at the system of the three levels and we are asking what are now the relevant processes, one limit is, of course, the trivial limit that we have single photon absorption. This is, of course, a trivial case.
Let's now go to the more interesting case that we have a coupling laser. What happens now? We have our two states. The excited state is coupled with the laser and Rabi frequency omega 2. Now we know that the laser omega 2, if it is not on resonance, will give us an AC Stark shift. This AC Stark shift, we've gone through that several times. It's the matrix element or Rabi frequency squared divided by the detuning. If we now bring in the probe laser, what are the features we expect? Well, there are two features. One is we've just discussed the trivial case above. If we tune the probe laser into resonance with the excited state, we have single photon absorption. We get a broad feature. It's almost like in the two level system. But in contrast to the case I just discussed above, the excited state level e has now an AC Stark shift so the resonance is shifted. That's now becoming a four photon process because we need two photons going up and down with a coupling laser to create the AC Stark shift. And now we have a laser from the probe beam and the photon is scattered, so it's a four photon process and it will give us a broad resonance which is now AC stack shifted. But in addition, we have a resonance, which is the Raman resonance. When the Raman detuning is zero, then we absorb from the probe laser and we emit in a stimulated way with the coupling laser, and we have a stimulated two photon transition. Now, what is the width of this stimulated two photon transition? Well, we go from a stable ground state to-- I wanted to say another stable ground state, but this other stable ground state is now scattering photons. So because of the presence of a strong coupling laser, you have broadened this level f by photon scattering. You interrupt the coherent time evolution by scattering photons, and the photon scattering happens in perturbation theory by the amplitude to be in the excited state squared times gamma. The scattering rate, gamma scattering, is Rabi frequency divided by detuning. This is the amplitude to be in the excited state. We square that, and then we multiply with gamma, and if I can trust my notes, it's a factor of two, which I don't want to discuss further. This is a quantitative argument. The analytic expressions are in the reference I've given to you. So the situation which we have right now can be in a very powerful way summarized as follows. We have our ground state, we have two continua we can couple to. We can couple to the excited state which has a width, gamma, through a single photon, but there is the AC Stark shift. Or we can couple through a two photon Raman transition to the state f, but the width is much, much smaller because it is only the scattering rate due to the off resonant coupling laser. Let me just write that down because I said a lot of things. So g couples now. When we detune the probe laser, we can be in resonance with this feature and we can be in resonance with that feature to a narrow and wide excited state. One excited state, of course, is the state f, but the coupling laser puts some character of the excited states into the state f. And the most important thing now is the following. This is the theme I've emphasized again and again when we discussed three level systems. Those two states have a width, and the width means they spontaneously emit light, but they emit light into the same continuum. So if you start in the ground state, your probe laser has a photon and the photon gets scattered. You do not know through which channel it has been scattered. 
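In symbols, the two quantities just estimated are (a sketch; the exact factors of 2 are left open here, as in the lecture):

\[
\gamma_{\text{sc}} \;\sim\; \left(\frac{\Omega_2}{2\,\Delta_2}\right)^{\!2}\gamma,
\qquad
\delta_{\text{AC}} \;\sim\; \frac{\Omega_2^{2}}{4\,\Delta_2},
\]

the photon scattering rate that broadens the state f, and the AC Stark shift produced by the off-resonant coupling laser.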
So in general, and this is as far as I want to push it in this class, in three level system, we have now the interference between those two continua. One is narrow and one is wide. Let me just write that down because this is important, but both excited states couple to the same continuum. And by continuum, I mean the vacuum of all empty states where photons can be scattered, and this is the condition for interference. And this is, of course, what gives rise to electromagnetically induced transparency. I'll take your questions, but let me first give you a drawing which may illustrate, or summarize, what I've just said. So what we try to understand is what happens when we detune the probe laser. Until now, we had always the EIT feature, the EIT window, was completely overlapping with the one photon resonance, but now, because the coupling laser has a detuning of gamma 2, we have to use with the probe laser. I have to trace back how the detunings are defined. The Raman detuning was big delta 2 minus big delta 1. If the detuning delta is chosen to be delta 2, we have the simple situation that we go from the ground state right to the excited state, and we have the feature which has a width of gamma. This is what you would call the single photon resonance. It is the single photon resonance. It is due by resonantly coupling into this continuum, and the only feature of the coupling laser is that there's an AC Stark shift to it. Single normal resonance, almost like in a two level system, but the only addition is the AC Stark effect. Now we have a second feature, which can be very sharp. This is when we do the two photon Raman process into the other ground state. Due to photon scattering, this resonance has a width of gamma scattering, and the position is not at zero at the naked Raman resonance. It also has an AC Stark shift because the coupling laser does an AC Stark shift to both the excited and the ground state. The coupling laser, the AC Stark shift, pushes ground and excited state in opposite directions. So therefore, we find that at the Rabi frequency of the coupling laser divided by gamma over 2. The name of this feature is the two photon Raman resonance plus the AC Stark shift, which actually means it's a four photon process. The question is now, where is the electromagnetically induced transparency? We have introduced now specifically the absorption feature. We have identified a one photon absorption feature plus AC Stark shift, a two photon absorption feature plus AC Stark shift, but where is electromagnetically introduced transparency? Here. Electromagnetically induced transparency is always at delta equals 0. You have to fulfill the Raman resonance, and this resonance is not affected by any AC Stark shift. It's always at delta equals 0. So therefore, what that means is you have two absorption features, and you would think these are two Lorentzians, but they interfere and they go to exactly 0 at delta equals 0, and this is our EIT feature. So there is an interference effect between a broad feature and a narrow feature, and this is found in many different parts of spectroscopy, but you also find it in nuclear physics whenever you have a narrow feature embedded into continuum. What we have here is a narrow feature and a broader feature, but a narrow feature and something which is broader or continuum is called a Fano resonance. You actually have the same situation when you look at scattering of atoms. Many of you are familiar with Feshbach resonances. 
Well, a lot of people call it Fano Feshbach resonance, and what you have in a Fano Feshbach resonance is two atoms can scatter off each other. This is the continuum. This would be your broad feature. But then they can also scatter through a molecular state, and this is a narrow feature. And what we've identified now for electromagnetically induced transparency are the two features. One is the single photon absorption, one is the Ramon resonance. But in general, the concept is much more general. You have a narrow feature, you have a broader feature. It's responsible for scattering two atoms or it's responsible for scattering a photon, and once the photon or the atoms have been scattered, you have no way of telling which intermediate state was involved. And therefore, you get interference. And what I just said, what is EIT for light is the zero crossing of a scattering length that the atoms do not scatter off each other because the two different processes completely destructively interfere. So what I've shown here is two Lorentzian, two absorption features. Let me know re-plot it and plot the index of refraction minus 1. Here was EIT, zero detuning. We have a sharp feature here at the two photon resonance, we have a broad feature here at the single photon resonance, and if I now transform the Lorentzian into a dispersive feature. I use freehand, so this is a dispersive feature for the broad transition. The narrow transition has a much, much sharper dispersive feature, and the important part is now at the EIT, at the detuning delta is zero where we have electromagnetically induced transparency, we have an index of refraction which is exactly one because it's a dark state, you have no absorption, you have no light scattering, you have no reaction to the light. And therefore, the index of refraction of the material is like the index of refraction of the vacuum. So you have n equals 1. It looks like a vacuum in terms of index of refraction. It looks like a vacuum because you have no absorption. But what you have is you have a large derivative of the index of refraction with the frequency detuning, and that affects, and that's what I want to tell you now, the group velocity of light. So anyway, this is maybe as far as I want to push it, and I was actually wondering if this is a little bit too complicated to present in class. But on the other hand, I think it sort of also wraps up the course. We have a three level system, and we find a lot of things we have studied separately before-- two photon Raman feature, single photon light scattering-- but now they act together and they interfere and have this additional feature of electromagnetically induced transparency. Colin? STUDENT: So the way you drew the level diagram, the f, I think, [INAUDIBLE] state [INAUDIBLE]? PROFESSOR: Yes. Actually, I emphasized that we have an AC Stark shift, and what I didn't say when I discussed it here that the AC Stark shift pushes this level down and pushes the other level up. But since we are talking about a very broad resonance in the excited state, for all practical matters, the AC Stark shift doesn't matter, whereas for the narrow Raman resonance, the AC Stark shift is important. STUDENT: Also, you showed on the plot that the shift of the excited state [INAUDIBLE].. PROFESSOR: Sorry. Thanks. So the AC Stark shift for that detuning would shift this level up and would shift this level down. What's the second question? STUDENT: You drew on the plot-- PROFESSOR: This one? STUDENT: That one. 
That the shift of the excited state was much higher than the shift of the ground state. PROFESSOR: You mean those two shifts? STUDENT: Yeah. PROFESSOR: We have to now do the bookkeeping. We have assumed that the coupling laser has a large detuning, the coupling laser is very far away from resonance. And if you want to hit the excited state, we know we need a Raman resonance, which is delta 2, but the Raman resonance, capital delta 2, means that we are smack on the single photon resonance for the probe laser. So this feature is pretty much we take the ground state and go exactly to the excited state with a single photon. There is an AC Stark shift involved but it's not relevant here, whereas the other feature is the two photon Raman feature. And the one thing I wanted to point out in this context is that there is actually a small energy splitting between the two photon Raman feature by the AC Stark effect, whereas the EIT feature always happens at Raman resonance delta equals 0. Because, just to emphasize that, delta equals 0 is really you induce a coherence between g and f. It's a dark state, and when you have a dark state in that situation, you don't have an AC Stark effect. So the EIT feature is at delta equals 0, whereas the photon scattering features, they suffer, or they experience-- it may not be negative to suffer an AC Stark shift, but they experience an AC Stark shift. Further questions? Yes? STUDENT: Can we somehow relate this to Doppler free spectroscopy? PROFESSOR: Can we relate that to Doppler free spectroscopy? Actually, I don't think so because I would say for the whole discussion here, let's assume we have an atom which has infinite mass, which is not moving at all. We're really talking about internal coherences. However, and that's where it becomes related, if you look at very, very narrow features as a function of detuning then, of course, Doppler shifts play a role, and if you have very, very, very narrow features, you become sensitive to very, very, very small velocities, and therefore, you have an opportunity to cool. So if you can distinguish spectroscopically an atom which moves a tiny little bit and an atom which stands still, if you can, by a narrow line, distinguish the two, then you can actually laser cool this atom. The EIT feature can give you extremely high resolution. I'm not discussing it here. I've discussed the phenomenon of EIT, which is coherent population trapping. But there is an extension, which is VSCPT, Velocity Selective Coherent Population Trapping, and VSCPT was a powerful method to cool atoms below the recoil limit, but I've not connected anything of coherent population trapping with the Doppler shift. So for all this discussion, please assume the atom is not moving. Other questions? Then let me just say a few words about the fact that we have a large derivative of the index of refraction, and this, of course, is used for generating what is called slow light. The group velocity of light is the speed of light, but then it has a denominator which is the derivative of the index of refraction with respect to frequency. Towards the end of the last century, there were predictions that electromagnetically induced transparency would give you very sharp features which can be used for very slow light. And what eventually triggered major developments in the field was this landmark paper by Lene Hau where she used the Bose-Einstein condensate to eliminate all kinds of Doppler broadening. 
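The group velocity formula behind this statement, in its standard form, is

\[
v_g \;=\; \frac{c}{\,n(\omega) \;+\; \omega\,\dfrac{dn}{d\omega}\,},
\]

so at the EIT point, where n = 1 but dn d omega is large and positive, the group velocity can drop many orders of magnitude below c-- provided Doppler broadening does not wash out the narrow feature.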
There are other tricks how you can eliminate it, but this was the most powerful way to just take a Bose-Einstein condensate where atoms have no thermal velocity and, in this research, she was able to show that light propagated at the speed of a bicycle. So it was a dramatic reduction of the speed of light, and this showed the true potential of EIT. There have been other demonstrations before where light has been slowed by a factor of 100 or a few hundred, but eventually, combining that with a Doppler-less feature because the atoms don't move in a BEC created a dramatic effect. So we have now two ways how we can get a large derivative, dn d omega. I've discussed here the general case that we have a narrow feature, a broad feature, because we have the coupling light detuned from the excited state, but let me just point out that even if you have the coupling light on resonance, depends what you really want, but you can get an even stronger feature in the index of refraction versus frequency. This is now the situation where we have the strong absorption feature, but then we have the EIT window. So we have this superposition of a positive Lorentzian and a negative Lorentzian, and if I now run it through my Kramers-Kronig calculator and I take the dispersive shape, I can sort of do it for the broad feature in this way and for the narrow feature in this way, and now you have to add up the two. And what you realize is at this point, you have a huge dn d omega. What I'm plotting here is on the left side, the absorption of the Lorentzian, and you can regard this sharp notch as a second Lorentzian. So you have the positive Lorentzian, negative Lorentzian, and then you take the dispersive features and you add them up with the correct sign. So whether you're realizing now for quite a general situation where you have single photon detuning, which I discussed before, or whether you're on the single photon resonance, you can have extremely sharp features. So now you can take it to the next level. You have a light pulse which enters a medium, and now the light pulse slowly moves through the medium. But while the light pulse moves through the medium, you reduce the strength of the coupling laser. What happens? So you do now an adiabatic change of your system. You do an adiabatic change of the control field omega 2 while the probe pulse is in the medium. Well, that means that under idealized assumptions which we've discussed, this feature gets narrower and narrower and narrower. If omega 2 goes to 0, the strength of the control field, this feature becomes infinitesimally narrow. And therefore, this feature becomes infinitely sharp, and that means that the group velocity goes to zero. This is now in the popular press, it's called stopped light or frozen light because the light has come to a standstill. What really happens is the following. We have our coupling laser, omega 2, and we have our probe laser, omega 1. When we do what I just said is that omega 2 goes through zero, then the dark state originally, for very strong omega 2, remember the dark state was g? But now if we let omega 2 go to zero, the dark state will become f. And that means that in a way, every photon in the probe pulse has now pumped an atom from g through two photon Raman process into f. So therefore, what it means to stop light or to freeze light means simply that the photons of the laser have turned into an atomic excitation where the excitation is now the state, f. In other words, you have written the photon. 
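One way to write what was just said, using the dark state from earlier in the lecture (a sketch, up to normalization):

\[
|D\rangle \;\propto\; \Omega_2\,|g\rangle \;-\; \Omega_1\,|f\rangle ,
\]

so for a strong coupling laser, Omega 2 much larger than Omega 1, the dark state is essentially g, and as Omega 2 is ramped to zero the dark state rotates adiabatically into f-- the probe photon's excitation has been written into the g-f coherence.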
The photon has now put the atom into a different hyperfine state. So if this is done adiabatically, and I can't do it full justice in this course, but this means that the light is coherently converted into the atomic state-- when I say "coherently," I don't just say "into population," I mean all the quantum phases, everything which was in the quantum nature of the light has now been converted, has been written into the state, f. This is often called, because g and f are hyperfine states, this means that you have coherently converted the photon or the electromagnetic wave in the probe beam into a spin wave or a magnon. Anyway, I just want to show you the analogy. The fact that you can put the quantum information of light into an atomic state and back and forth, we've discussed that when we had the situation of cavity QED. We prepared a superposition of ground and excited state and exactly the same quantum state which we had in the atom we later found in the cavity as a superposition of the zero photon and the one photon state. So from those general concepts, it should be clear to you that it is possible to coherently transform a quantum state from light to atoms and back to light. And here you see the different realization. We have a quantum state of the photon in the probe laser, and we can now describe the excitations in the system in a parametrized way. What it means is for the strong probe laser-- for the strong coupling laser-- the excitation in the system travels as a photon in the field one, but when you reduce the coupling in the coupling laser, the excitation becomes less and less photon-like. It becomes more and more magnon-like, spin wave like. And the moment you reduce the power in the coupling laser to zero, what used to be an excitation in the electromagnetic field has now been turned adiabatically into a spin excitation. Coherence has been written into the hyperfine states of your atoms, g and f. All this is done coherently, and therefore reversibly. You can read out the information by simply time reversing the process. You ramp up again the coupling laser, and that adiabatically turns the spin excitation back into an excitation of the electromagnetic field. Any questions? Well, we have three minutes left, but I'm not getting started with superradiance. We have one more topic left, and this will be the topic on Wednesday, Dicke superradiance, and this is when we discuss the phenomenon of coherence where we have coherence not only in one atom between two or three levels. We will then discuss on Wednesday if we have coherence between many atoms, and this is at the heart of superradiance. OK. See you on Wednesday.
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
21_Twophoton_Excitation_II_and_Coherence_I.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So, good afternoon. Today we want to wrap up our discussion on two-photon processes. And just to repeat my motivation: in almost all cases when you address atoms, you do two photon processes because a photon is scattered. You may think it's absorbed and emitted, but in reality, it is a two photon process and not two single photon processes. So therefore, you should really pay attention. If you have any doubts about some subtleties about how light is absorbed and emitted, the correct answer is always obtained from the two-photon picture. Now, I'm using pre-written slides because we treat two-photon absorption in perturbation theory. And it is exactly the same perturbation theory we've used before. It's just-- there's one difference. Namely, we have two optical fields at frequency omega 1 and omega 2. So for the case of two-photon absorption-- that means both photons are used or stacked up to go up in energy. We derived this result, and this was the end of lecture last week. And what we obtained in perturbation theory for the excited state, it's exactly the same structure you have seen before. But the only difference is we have now four terms because we have combinations of omega 1 and omega 2, or we can take two photons out of the same laser beam. Just sort of a question, just to sort of indicate to you how many terms you would expect when you do the most basic light atom interaction. Just one photon, a plus a dagger. How many terms do you get if you write down the Hamiltonian? No approximation whatsoever. How many do you get? AUDIENCE: Four. PROFESSOR: Four. You have a plus a dagger for the electric field, sigma plus plus sigma minus for the atom. And then you have four combinations, two are co-rotating, two are counter-rotating. OK. What we are doing here is second order perturbation theory with two optical fields. If you would not do any rotating wave approximation, how many terms would we get? AUDIENCE: Eight. PROFESSOR: Eight? I think it's multiplicative because we have four processes involving one-- oops. Now I'm getting confused. I wanted to say 16, 4 times 4. But now I would say in the first step, which is to the intermediate level, we have four at frequency one, four at frequency two, which makes eight. But then I think in the second step you get eight more. So if we don't make an approximation, we get 64 terms. But they're just all combinations, all combinations of frequencies. Anyway, therefore, I hope you appreciate that I did the rotating wave approximation. I said I'm only interested in the near-resonant terms. And then when we say we want to go up in energy in two steps. We absorb two photons. We don't have any emission of photons. These are all the counter-rotating terms. You have only the absorption of photons, and then we have four possibilities. Yes, I mean, you have a term where two photons are emitted from the ground state. This is sort of now doubly counter-rotating. We're not going there. It's not adding anything new to it. You know what counter-rotating terms mean. In this chapter on two photons, I'm completely focused on the resonant terms. But since four terms is still too many, I want to just tell you what is special about two photons.
I focus now on a situation, and that's the most common situation in the laboratory, where there is a near-resonant intermediate state and that is sort of now filtering out one of the terms. If this intermediate state is resonant with omega 1, then we only want to consider now the process where the first step to the intermediate step is driven by the field e1 and the second step to the final state b is driven by the field e2. So therefore, we have only one term, which dominates out of those four, or dominates out of those 64, which we would have gotten without any approximation. It's one term-- the near-resonant term-- which dominates. So that's what we want to discuss now. So this is the term we want to consider. And it's the same we've always done in lowest order perturbation, second order perturbation theory, in two steps. It's now in two steps. But if you're asking, what is the transition probability? The transition probability, we have to get the probability to be in the excited state. And then we have the usual situation that this term can be written as the sine squared divided by this, and it turns into a delta function times t. And when we divide the probability, the amplitude squared by t, we get a rate. And this is Fermi's Golden Rule. It is exactly the same you have seen probably more than 100 times. So therefore, we have now a transition rate, which is Fermi's Golden Rule. This is the delta function. I called it the function f because I want to discuss the spectral profile a minute. But then-- and this is the only difference to Fermi's Golden Rule with a signal photon-- the relevant matrix element is, because we have two steps, is the product of two matrix elements squared for step one and for step two. And because we have an intermediate step, we have to divide by the energy mismatch by the detuning in the intermediate state. But remember, in the one photon picture, Fermi's Golden Rule is the matrix elements squared times the spectral function. So in this matrix element squared in frequency units, neglecting factors of 2, was the Rabi frequency. So therefore, very naturally, we want to define this as a two-photon Rabi frequency whereas each matrix element here divided by h bar was the single photon Rabi frequency. So therefore, what we obtained for the two photon processes, we have a two-photon Rabi frequency, which is the product of the single photon Rabi frequency for each step, divided by the energy detuning from the immediate state. So therefore, our result looks almost indistinguishable from the result built on single photons. We've just cleverly defined our quantities. The rate to go from a to b is Rabi frequency squared, but it is the two photon Rabi frequency. And the delta function is the delta function for the resonance-- the energy difference between state a and b-- but now not just minus omega 1. It is minus the sum of omega 1 plus omega 2 because we have stacked up the two photons in the two photon process. So let me just write that down. It looks like the one photon excitation but with suitably defined Rabi frequencies. So in other words, if you were interested just in the physics of two levels-- Rabi oscillation, you name it-- you can just say the same thing happens. The only difference is that instead of having a coupling directly by a matrix element, we are now coupled by this two photon Rabi frequency. 
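In symbols, with Omega 1 and Omega 2 the single photon Rabi frequencies of the two steps and Delta the detuning from the intermediate state (a sketch; the factor of 2 depends on how the single photon Rabi frequency is defined):

\[
\Omega^{(2)} \;=\; \frac{\Omega_1\,\Omega_2}{2\,\Delta},
\qquad
W_{a\to b} \;\propto\; \big|\Omega^{(2)}\big|^{2}\; f\!\big(\omega_{ba} - \omega_1 - \omega_2\big),
\]

which is Fermi's Golden Rule with the two photon Rabi frequency in place of the single photon one and the spectral function evaluated at the two photon resonance.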
And all your equations, you know, everything-- you can consider line shape, spontaneous emission, saturation-- all the phenomena we have discussed for a single photon are analogous. You just have to use the density of states calculated for the two photons and you have to use the two photon Rabi frequency. OK. So I started out by telling you about two photon processes, two photon absorption. But what is maybe even more important in the way it is used in experiments are Raman processes. So let me just show what I mean. If you have a state a and b, this can be two different vibrational states of a molecule. It can be two hyperfine states of an atom. Or if you think about a Bragg process, it could be the same internal state of the atom-- the same hyperfine state-- but with two different momenta. Then it is a Raman process only in the external degree of freedom, in the motional wave function. The only change is you change the momentum. We need our intermediate state, which is often an electronically excited state. We are detuned. And now we have one photon going up and one photon going down. Historically, people distinguish between the situation where the final state is lower or higher in energy. One is called the Stokes process. The other one is called anti-Stokes. But as long as you use laser beams to stimulate it, you don't even care which state is higher or lower in energy. But if you work in molecules with a thermal ensemble, then you have certain states thermally populated and others not. And then it makes a difference whether you start from the ground state a or from an excited state b. OK. So actually, everything for the Raman process is completely analogous-- is completely covered, actually-- by what I wrote down for you in perturbation theory. It's just, if you had kept all the 64 terms-- to go up with one photon and down with one photon was one of them, but we discarded it because we were only interested in going up. So in other words, what was previously one of the counter-rotating terms, where omega one had a plus sign and omega two had a minus sign, now it becomes a resonant term because we have arranged our two levels a and b in such a way that the near-resonant process is that one. So in other words, I mention it to you and it's just getting too messy to write it down, when we have e to the i omega t and e to the minus i omega t, I mentioned once to you that the sign plus or minus means whether we absorb a photon or whether we emit a photon. If you use a fully quantized picture with a and a daggers, the a for the quantized description becomes an e to the i omega t. In the semi-classical description, the a dagger has a minus sign. So therefore, if you look at all the combinations of plus i omega t and minus i omega t, for this Raman process we want to select e to the minus i omega 1 t and e to the plus i omega 2 t. And that means we are focusing on this process. Therefore, everything for the Raman process is analogous to the two-photon absorption process. The only thing we have to do is we have to change the sign on the second frequency because the second photon is emitted in a stimulated way and not absorbed. And therefore, our detuning is the detuning from the Raman resonance. And therefore, OK, back to Fermi's Golden Rule-- the rate in Fermi's Golden Rule is the matrix element squared times the spectral density indicated by the delta function. The delta function is now at the frequency which is given by the two-photon detuning.
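For the stimulated Raman process, the only change in this formula is the sign of omega 2, so the line shape function now sits at the Raman resonance (a sketch):

\[
f\!\big(\omega_{ba} - (\omega_1 - \omega_2)\big),
\qquad \text{i.e. resonance at } \omega_1 - \omega_2 = \omega_{ba}.
\]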
And the two-photon Rabi frequency is exactly the same as what we had for two-photon absorption, the product of single-photon Rabi frequencies divided by the detuning. Any questions? Yes. AUDIENCE: [INAUDIBLE] PROFESSOR: Well, we are usually talking about-- in an atomic system-- about very narrow resonances. And we are working hard on our lasers to be close to one resonance. And it would be an amazing coincidence if accidentally it would be in resonance with another one. But I would say, if you had a situation-- let's say in a molecule, which has a high density of states, the Raman process would be a rotational or vibrational Raman process involving the ground state. And if you're unlucky and don't choose your lasers wisely, the two laser photons could get you high up into an electronically excited state. And this may have some detrimental effect, depending what you want to do. But in general, I would say if you have more than one process, there is no interesting interference term. You just get two different rates. One is the two-photon Raman rate and the other one is the two-photon absorption rate. And you just have both simultaneously. They're not leading to the same final state. If something leads to the same final state-- this is more subtle because you can then get interference effects. But we'll discuss some of those things in our next chapter on coherence. Any other questions? OK. I want to now take it one level higher, where we talk still about two-photon processes but we are allowing one of the photons to be spontaneously emitted. Again, we don't have to learn new things. We just have to map it to knowledge we already have. And let me sort of do it in the following way. I just want to sort of give you a clear understanding of what this expression for the two-photon rate is. If we assume laser one and laser two are near-resonant with a transition a to k and k to b, respectively, we can sort of look at the two-photon process in the following way. We can say the photon omega 2 cannot be absorbed by the initial state. It can only be absorbed because omega 1 mixes in, with a certain probability, the state k into the ground state. So if you would first forget about it and you just do perturbation theory, then you would say in perturbation theory with the field one, the state a has now a probability, given by this term, that the state a has now an admixture. And now, if we have sort of-- we have dressed up our state a by admixing, through the near-resonant field, some probability of state k into it. And this dressed state now has sort of a stepping stone here. And from this stepping stone, it can now absorb the photon omega 2. So that's how we should think about it. I'm treating this dressing up of the initial state just in perturbation theory. And that's why everything was in one formula when I applied perturbation theory. But as we especially cover in the second semester of the course, you can also say if omega 1 is very, very strong, you can exactly diagonalize the Hilbert space of states k and a. And this is called the dressed atom picture. But again, what happens is you mix those two states. And it is the admixture now in a non-perturbative way of state k, which is sort of the stepping stone. And from this stepping stone on, you can absorb a photon omega 2. So we could actually-- let me just redraw this-- that we want to go to the final state b. But in this kind of picture I just suggested, I start with a dressed state a. But what is relevant is only kind of this admixture.
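The perturbative admixture described here can be written as (a sketch, with Delta the detuning of omega 1 from the a to k transition):

\[
|\tilde a\rangle \;\approx\; |a\rangle \;+\; \frac{\Omega_1}{2\,\Delta}\,|k\rangle ,
\]

and it is only this small k component, of amplitude Omega 1 over 2 Delta, that provides the stepping stone for absorbing or emitting the second photon.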
And from this admixture, we can absorb omega 2. The real state k is somewhere else. And so it looks like, actually, now a two-level system, where we go from the dashed line-- called the virtual state-- to the final state b. And let me just point out what this virtual state is. Well, you know already everything about it because everything which can be known about it is what we have derived in our formula. I'm just interpreting the perturbation theory I've written down for you. And if I now call it a virtual state, there is nothing more you can ever know about this state than what was in this formula. But it's maybe helpful to summarize it. Because we have a resonance with frequency omega 2 in this situation, it is clear that the energy of this state is where the dashed line is. It's not the energy of the real state k. The stepping stone is created with the first photon. And the dashed line is the energy level of the virtual state. What is its character? Spatial wave function and such? Well, it is exactly the intermediate state k. And what is the population? If you had a two-level system, we sort of start with 100% amplitude in state 1. But here, our population is diminished by the probability with which we have admixed the state. So in other words, if I really wanted to, and I want to use this concept, I could make a simplified description of the two-photon process. I would just say the two-photon process is just a single photon process starting from the virtual state, and the virtual state is created by the first photon. Well, you would say, well, why do you do that? I think the picture I've just drawn for you is sort of helpful when we discuss now two-photon emission-- spontaneous emission. With two lasers, it's sort of simple. But with two-photon emission, we have the situation that we start in an excited state b. We have one laser, omega 1, and that's it. But we will find out that, eventually, the system populates the ground state. And one possible process-- and that's the one we are focusing on-- is that it was first emitting a photon in a stimulated way, but the second photon, since we're not offering any extra stimulation, had to be emitted spontaneously. Now, you say, well, how do I calculate it? I can write out long equations. But with the concept I've given to you, it should be clear that what we actually have is here, we have nothing-- let's just look at the first part as a dressed atom, an atom in state b with some admixture of state k. And this admixture can now decay by a single-photon process. So in other words, you don't need to re-derive anything. You can just sort of use analogy to write down what is the spontaneous emission rate out of the state k. So what you would write down now is the rate for this two-photon emission-- one photon stimulated, the second photon spontaneous-- is simply the Einstein a coefficient, or the spontaneous emission rate out of this intermediate state, gamma ka. But then we have to multiply with the probability that this state is present in the state b because b-- that's what we assumed-- has no direct matrix element to emit to state a. And now you should sort of be amazed about the beauty of concepts you have learned. We are talking about something which maybe before this lecture-- wow, one photon stimulated, one photon spontaneous. That must be complicated. But it's just that, except for one thing. And this is the following-- remember, you should always remember how we derived the formula for spontaneous emission.
The physics of spontaneous emission is that you can put one photon into each of the empty modes, and you have to sum over all modes. And what was important was the density of modes at the frequency. And now the frequency is omega. So when we calculated the spontaneous emission rate-- the decay of the excited state k-- we had an omega cubed dependence at the resonance frequency for the transition ka. But now we are interested in the density of states at frequency omega. Well, you'll remember, two factors of omega come from the density of states, and one comes pretty much from the single-photon Rabi frequency. And this is also evaluated at the frequency of the photon. So you just have to correct the omega cubed factor. And this is our result now for two-photon emission. OK. What may be even more relevant, at least in the research which is done in my group and in other groups at MIT, are again Raman processes. We often have Raman processes-- you know, you need a reason why you want a two-photon process. If you can't reach the upper state with a single photon-- like people in [INAUDIBLE]-- you may just use two photons. But that is more of a limitation, because they don't have a laser which can bridge the gap. In situations where you work with alkali atoms, we are often very happy: we have one resonant line and we can do all the laser cooling, everything we want. But often, we don't like that the line width of the excited state is very large. And therefore, certain precision work, where we want to be very accurate in what we're doing with the atoms, cannot be done on the D1 or D2 line. And therefore, we often use a two-level system which consists of two hyperfine states, because then there is no broadening due to spontaneous emission. So one motivation why we again and again consider two-photon processes in our laboratory is that they give us access to very narrow resonances. But I think you get the gist now, and it will become even clearer later on, that often when you do a transition between two ground state levels, a lot of the physics is the same as for a single photon. You just replace your single-photon Rabi frequency by the two-photon Rabi frequency, and suddenly you can do everything you ever wanted to do with a single photon, but now with the benefit of having a very narrow resonance. So I think the most important aspect of two-photon processes-- the one 99% of the research people in our field are involved with-- is actually in the form of Raman processes. But for pedagogical reasons, I just liked to start out with the two-photon process-- one photon stimulated, one photon spontaneous. But now let's just fold it over, and we have the lambda-type transition. The first photon is absorbed and the second photon is spontaneously emitted. I think, as you realize when you go from two-photon absorption to the Raman process, there's nothing you have to re-learn. You just have to be careful with the signs of omega 1 and omega 2. So if I would ask you now what is the rate of the spontaneous Raman process, well, it is the probability to be in the intermediate state. And this probability-- just rewriting in a different way what we have used-- is the Rabi frequency of the first step squared, divided by the detuning squared. So this is the probability to be in the excited state.
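Putting the pieces of the last two paragraphs into a single line (a sketch; Gamma_ka is the spontaneous decay rate of the intermediate state at its own resonance frequency omega_ka, omega is the frequency of the spontaneously emitted photon, and prefactors again depend on conventions):

    \[
    \Gamma_{2\gamma} \;\approx\;
    \left(\frac{\Omega_1}{2\Delta}\right)^{2}\,
    \Gamma_{ka}\,
    \left(\frac{\omega}{\omega_{ka}}\right)^{3}.
    \]

The first factor is the probability of the admixed intermediate state, the second is the single-photon decay rate out of that state, and the cubic factor corrects the mode density and field-per-photon from omega_ka to the actual emission frequency omega.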
Now, the spontaneous emission occurs with Einstein's a coefficient connecting the intermediate state to the ground state. And then, of course, as we just learned, we have to correct the spectral density and such, or we have to use for the calculation of the density of modes. The correct omega factors-- we have to calculate it at the frequency of the emitted photon. This is actually also-- this kind of spontaneous Raman process-- has been very important historically. Before the advent of lasers, all you had is light bulbs, strong light bulbs, maybe mercury lamp which put a lot of light into the mercury light, and at least it was somewhat spectrally narrow. But still very, very broad. And you couldn't really resonantly, you know, stack up two light bulbs and have enough spectral power to excite to a certain state. But look here-- you could still, with a strong light bulb, create an admixture of the excited state. This virtual line was terribly broad because of the width of the light bulb-- spectral width-- but then this spontaneous photon was just compensating for it. So in other words, it should be obvious-- this process only depends on the power for the first step. And it doesn't really matter if the power is delivered by a laser or a light bulb. The rate for this process is the same. And this actually was the discovery by Raman, which was rewarded with the Nobel Prize, for suddenly observing when you excited molecules with a very strong light bulb, you suddenly saw very different frequencies of photons coming out. And this was a landmark discovery. OK, what is next? I've already written it out. So that's also important for a lot of research within the CUA. When we simply want to change the momentum state of an atom, we have two lasers. We go up and down. We are not changing the internal state. But it is still a Raman process because state a and b differ by the photon recoil. So we're not going back to the same state. We are going back to the same internal state a, but it may have 2h [INAUDIBLE], two-photon recoil different. And therefore, as long as, in quantum physics, one quantum number is different, it is a different state. And therefore, if you just think about it Rayleigh scattering resonance fluorescence, that you go up and you go down to the next state. You may think you have your favorite atom, and you go up on a cycling transition. Well, when the atom goes up and then emits a photon on a cycling transition, there is recoil involved. So actually what you're doing is on the cycling transition, you cycle it through many, many spontaneous Raman processes. So this is the correct description of resonant fluorescence and Rayleigh scattering. Any questions? Good. I've just mentioned that when we do Reyleigh scattering, we have to consider that the photons have momentum, and this takes us to another state. Let's now be a little bit more careful and consider what is the role of the momentum in the transition-- in the two-photon process. And in particular, I want to come back to this Fermi's Golden Rule formula and include in this spectral profile, which in the simplest case is always the Lorentzian. But I want to include now Doppler shifts. So in other words, what I want to do now is I want to talk about recoil and Doppler shifts in a two-photon process and see how it will affect the line shape. All we have to do is-- or maybe let me back up. You should maybe consider what I've discussed so far, is the situation of an atom which has no motion. 
We could just fully focus on the internal degree of freedom. And just to remind you, we have discussed two ways how you can eliminate motion from the picture. One is to assume the atom has infinite mass. That's one possibility. The second one is to assume the atom is tightly localized in an ion trap, deep in the [INAUDIBLE], so that it is localized to less than the wavelength. And actually, the two ways of how you can eliminate recoil and Doppler shift are really the same. When I said, give the atom an infinite mass-- well, you give the atom an infinite mass by tightly connecting it to the laboratory. And this is what tight confinement in an ion or atom trap does. Because then the recoil is absorbed no longer by the atom, but by your experimental structure, or by your whole laboratory, or even by the building, if you want. But now, we are going beyond this restriction. We are now saying, OK, now we allow the atom to move. And we can deal with that by simply saying, when the atom has a velocity v, we can just use the Galilean transformation and say, OK, the physics is the same. However, the atom, due to its velocity, sees a slightly different frequency. So therefore, we have our Lorentzian. But now, we calculate our Lorentzian by using the frequencies perceived by the atom. The different signs-- plus, minus-- are, of course, distinguishing whether we have two-photon absorption or a Raman process. And the frequency shift, going into the frame of the atom, gives us Doppler shifts k1 dot v and k2 dot v. And now you see that there is something which is potentially interesting, and I want to discuss that. If k1 and k2 end up here with a minus sign, it may eliminate Doppler shifts, maybe even completely. So this is something new. If you have a single photon, you always transfer momentum to the atom. But if you have two photons, the two momenta can cancel. And this is actually a powerful method to avoid Doppler broadening in spectroscopy. So let me elaborate on that. The message you get from this formula-- it's actually much easier to say it in words than to write it mathematically, because it's all hidden in the plus-minus sign. But the gist is that two-photon absorption or Raman processes-- so any kind of two-photon process-- are like single-photon transitions. And we've already discussed that: we said we have Fermi's Golden Rule, like for a single photon. It's just that we have to use the suitable definition of the two-photon Rabi frequency. And we had also seen that we can actually treat the two photons as one. If you want to regard the two-photon process as a super photon, the frequency of the super photon is now either the sum or the difference of the two frequencies, depending on whether we look at two-photon absorption or a Raman process. But what we see now from the Doppler shift formula is that we can apply the same also to the momentum. It is as if we had a super photon which drives one transition, but the momentum of this super photon is now the sum or the difference of the two photon momenta. And if you wonder, how do you sum them up? What really matters is, after the two photons have been exchanged, what is the total momentum transferred to the atom. So what appears here-- k1 plus-or-minus k2-- is the total momentum transfer to the atom. And you see that for two-photon absorption, if the two laser beams are counter-propagating, the total momentum transfer is zero.
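Here is a minimal numerical sketch of that cancellation (assumed, illustrative numbers: two lasers near 500 nm and a 300 m/s velocity along the beam axis; the point is only the scaling of the residual first-order Doppler shift):

    import numpy as np

    # Illustrative (assumed) parameters
    lam1, lam2 = 500e-9, 505e-9       # laser wavelengths, meters
    v = 300.0                         # atomic velocity along the beam axis, m/s

    k1 = 2 * np.pi / lam1             # wave numbers, rad/m
    k2 = 2 * np.pi / lam2

    # First-order Doppler shift of the two-photon resonance, in Hz,
    # for the two geometries ((k1 +/- k2) is the total momentum transfer / hbar)
    copropagating      = (k1 + k2) * v / (2 * np.pi)   # two-photon absorption, beams parallel
    counterpropagating = (k1 - k2) * v / (2 * np.pi)   # two-photon absorption, beams opposed

    print(f"co-propagating beams:      Doppler shift ~ {copropagating:.3e} Hz")
    print(f"counter-propagating beams: Doppler shift ~ {counterpropagating:.3e} Hz")
    # For omega1 = omega2 (k1 = k2) the counter-propagating shift vanishes exactly.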
But if you have a Raman process where you absorb one photon and then emit it, the momentum transfer is zero, assuming similar frequencies, when the two Raman beams are parallel. So in other words, the situation without momentum transfer-- and you will see in a moment, or I've already said this is a situation where you are Doppler free-- for two-photon absorption the geometry is counter-propagating. For Raman processes, it is co-propagating. Then you have no or minimal momentum transfer to the atom. OK. So this total momentum transfer is minimized for k1 equals minus k2 for two-photon absorption, or for k1 equals k2 co-propagating laser beams in the case of the Raman process. Oops. Now you have it twice. Let me just focus, because there is very special interest in that, for precision spectroscopy of hydrogen. Let me assume we have just one laser, which produces-- but we want to dive a two-photon process with it. In this situation, omega 1 equals omega 2. The momentum transfers are the same. And if we arrange for the two photons to be absorbed from opposite directions, we reach the situation where the Doppler shift is really zero. So this is the way where we do Doppler free spectroscopy. And two-photon spectroscopy is one of the handful of methods of practical importance for eliminating the first order Doppler shift. So if you take an atom-- and let me just quote Dan Kleppner research, where this is the hydrogen atom-- and you have two laser beams from opposite direction. How will the spectrum look like? Well, we have the feature, which I just emphasized, that the two photons-- one is absorbed from the left and from the right, and therefore you get a very, very sharp line. Really sharp. I will talk about it in a second. But you cannot, of course, suppress the process where you absorb two photons from the left or two photons from the right. And therefore, you have sort of a broad pedestal. So the pedestal is where you take one photon-- both photons from the same side, whereas the Doppler free peak is where you have photons from counter-propagating directions. Since hydrogen is of methological importance, measurements of-- fundamental measurements of-- the Lamb shift, comparisons with QED calculations, measurement of the Rydberg constant-- these are all done by hydrogen spectroscopy. So therefore, it is very important to have precision method which suppresses the Doppler effect. However, let me point out that once you have completely eliminated the first order Doppler broadening, you are then limited by the second order Doppler effect. And as I pointed out, there is no geometry-- no arrangements of beams, no tricks you can play-- because one contribution to the second order Doppler effect simply comes from time dilation, that in the frame of the atom, relativistically speaking, the clock ticks differently. And therefore, the spectral line is different. So this is what we had discussed already for the second order Doppler effect, and this is not suppressed by two-photon. If you now estimate what is the relative line width-- so what is the delta, the line broadening you to the second order Doppler effect, in relation to the transition frequency? So just give me a second. Yeah. Then this omega cancels and we have an expression which is sort of interesting. It is mv squared. It's the energy-- the thermal energy-- but since we have normalized it to the transition frequency, it becomes now the thermal energy relative to the rest mass of the atom. You would think, well, this must be really tiny. 
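A quick order-of-magnitude check of that statement (a sketch, assuming atomic hydrogen and using kT as the thermal energy scale; the precise numerical prefactor depends on how you average over the velocity distribution):

    import numpy as np

    k_B = 1.380649e-23      # J/K
    c   = 2.99792458e8      # m/s
    m_H = 1.6735575e-27     # mass of atomic hydrogen, kg

    def second_order_doppler(T, m):
        """Fractional shift ~ thermal energy over rest-mass energy."""
        return k_B * T / (m * c**2)

    for T in (300.0, 1.0):
        print(f"T = {T:6.1f} K : delta nu / nu ~ {second_order_doppler(T, m_H):.1e}")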
Well, it is tiny. If you take room temperature, it is 2 times 10 to the minus 11. But people are now looking for precision in optical clocks which is in the 10 to the minus 15 range. So you really have to be able to know the temperature, know the kinetic energy distribution of the atoms, and be able to correct for it. Or ultimately, you may not be able to correct for it at room temperature, because you can't know for certain that the velocity distribution of the atoms in your laser beam is exactly at room temperature. You really have to go to low temperature, go to cryogenic temperatures. And in the famous experiments in Munich, in [INAUDIBLE] group, the typical situation is, for the 1s to 2s transition, they observed residual line broadenings on the order of a few hundred Hertz. This is not limited at all by the natural lifetime of the 2s state. As you calculate in your last homework assignment, the lifetime of the 2s state is actually due to two-photon emission. Amazing. Two photons in series. This is the way the 2s state decays. And you will, actually, with this rather simple description, get a fairly accurate estimate for the lifetime, which is a fraction of a second. And [INAUDIBLE] group uses a cryogenic experiment. They cool the hydrogen by collisions with liquid-helium-cooled walls, to maybe a Kelvin or so, three hundred times below room temperature. And this has been important to reach this precision. And still, I think, even at Kelvin temperatures, the second order Doppler shift is one of the important systematics. OK. This is what I wanted to discuss with you about two photons. Any questions? Yes. AUDIENCE: Just-- very lightly-- I would have thought [INAUDIBLE]. PROFESSOR: That's correct. It really depends on what you want. If you simply want an atomic clock, all you want is a very, very, very stable reference point. And people use for that [INAUDIBLE] and strontium, or the people who do ion traps use the aluminum ion. All they want is a spectral line which is sufficiently sharp, sufficiently narrow, and also insensitive to magnetic fields and electric fields. And so they select from all over the periodic table what they want. But people who want to measure fundamental constants-- the Rydberg constant-- who want to compare the Lamb shift with first-principles QED calculations, sort of test the precision of quantum electrodynamics, verify that quantum electrodynamics is a complete description of the atom-photon interaction, they can only deal with simple systems where all the calculations are possible. This is the case for hydrogen. And, actually, with some advances in the numerical calculation of wave functions and all that, it may also be possible to do it with helium. I know in the literature people have often suggested helium and have pushed the precision of two-electron calculations further and further. I think so far helium has not replaced hydrogen. It's still hydrogen. But this sort of tells you what the choices are if you want to test fundamental physics or determine fundamental constants. In all atoms other than helium and hydrogen, you would be limited by the infeasibility of many-electron calculations. Yes? AUDIENCE: Going back to the two-photon Raman process where you had the second spontaneous emission, I just want to clarify.
So in the past, all this time when you were talking about off-resonant single photon scattering, was this actually, really, the more descriptive picture or was there actually a different physical-- PROFESSOR: No, this is off resonance scattering. And if you ask me, when do you have a situation where you first absorb the photon and then emit it, I would say, I would like to know that. I don't think there is a situation where this is possible. You always-- you should always use a two-photon picture. Or the only situation, if you press a little bit harder, where I think you can think in first absorption and then emission, is if you have a gas with lots and lots of collisions, the first photon may not be fully resonant. But then you have a collision and the collision stabilised, provides the missing energy or takes away the extra energy. And then with something which is much more complicated than you ever wanted, you have now truly an atom in an excited state, which has completely lost its memory from how it was excited. And yes, for this atom, it will now simply spontaneously emit a photon. But in all other situations where you don't have any loss of coherence or something in between, you're never really going to the real state. You're always going to virtual state. It's all two-photon. And, I mean, we discussed that. Remember the clicker question where most of the class was confused when I said we go up-- if we're exciting the atom? But now we are asking, what is the spectrum of the emitted photon? It is a delta function at the drive field, and we discussed it at length. If you allow in your head any picture of first absorption then emission, that's where all the wrong answers come from. This is really the picture behind it. This is what happens. Photon in, photon out-- should be described together, unless you have-- and I'm just repeating myself-- something in between, which is sort of some form of de-coherence which decouples the two processes. OK. But that's a wonderful opening to our next chapter, namely coherence. To what extent-- we just discussed one aspect of it and we come back to this in this chapter-- to what extent is the photon which is scattered coherent with the incoming photon? So I want to feature, in this last big chapter of this course, coherence in all its different manifestations. I think this is rather unusual. I don't know of any textbook or any other course where this is done. But this is similar in spirit to what we did on line broadening. I felt I could create spatial connections by discussing all possibly line shifts and line broadenings together. And now I hope you will also see certain common traits if I discuss together all the different manifestations of coherence. So what we want to discuss is-- we start out by talking about coherence in a single atom. We can have coherence between two levels. Usually, I try to stop at the simplest possible manifestation. But when we talk about coherence, I cannot stop at two levels because there are many new qualitative features which come into play when we have three levels-- like lasing without inversion, like electromagnetically-induced transparencies, for those of you who have heard about it. On the other hand, I can reassure you, I don't think there's anything fundamental to be learned by going to four, five, and six levels, so we will stop at three levels. We can have-- so this is a single atom. But we can also have coherence between different atoms. 
And the phenomena we want to discuss are superradiance, which is very, very much related to the phenomenon of phase matching. Everybody who frequency doubles a laser knows about phase matching and phase matching conditions. You have to rotate the crystal or heat the crystal to the temperature where the whole crystal-- all the atoms in the crystal-- act coherently. And it's very related to superradiance. There is a third aspect of coherence between atoms which I will not discuss this semester, and this is the situation of Bose-Einstein condensates and macroscopic wave functions. This is discussed in the context of quantum gases in the second part of the course. So having these very different manifestations of coherence, I want to try now to give you a definition of coherence. But it's a bit difficult, because I want to cover with my definition all the cases I know. But with those examples in mind, for me we have the phenomenon of coherence if there is a well-defined phase. Well, if we have a phase-- a well-defined phase-- it's always a phase between quantum mechanical amplitudes. So we need two or more amplitudes. So coherence exists if there is a well-defined phase between two or more amplitudes, but we can only observe it if those amplitudes interfere. And it can be two amplitudes describing two different atoms, or it can be two amplitudes of two states within the same atom. But I will point out to you-- what is really relevant is an indistinguishability: that those two amplitudes are involved in two branches of a process which has the same final state. And like in Feynman's double-slit experiment, you don't know which intermediate state was taken. And that's where coherence manifests itself. So that means when we observe an interference-- and this is how we read out a coherence-- one observes a physical quantity, the population in a certain quantum state or the total electric field emitted, and this quantity is usually proportional to the square of the total amplitude. And that means we get an interference term. So coherence is important. Let me provide one additional motivation that coherence is an important technique and an important tool for measurements. In a way, it's subtle but trivial at the same time. Whenever we do spectroscopy, we are actually interested in doing a measurement of energy. We want to measure energy levels. And those energy levels can tell us something about magnetic fields through Zeeman shifts. If you're addressing energy levels in the gravitational field for atom interferometry, the energy levels reflect gravitational fields. Or if we try to eliminate or shield the atoms from magnetic fields and we just want to get the most precision in a reproducible energy level, this is the situation of atomic clocks. So pretty much when we use atomic spectroscopy for any application, we are interested in the energy levels. But this is very deeply connected to coherence and the phase, because the relative phase between two states is nothing else than the time integral over the energy difference between the two levels. So the phase evolved between the two levels is the difference frequency times time, or the time integral of the difference frequency dt. So therefore, when we are talking about coherence-- how can we maintain longer coherence between energy levels, or how can we create coherence in three-level systems?
This is actually intricately related to the fact that we can obtain more precise information about the energy levels. Anyway, this is a very general introduction to coherence. Before I talk about manifestations of coherence, I have two clicker questions because the first form of coherence I want to discuss is the coherence of-- the coherence involved in exciting atoms and the atom emitting light. It's related to the spontaneous emission and scattering problem. So just to sort of figure out what you know already, let me ask you something about the nature of spontaneous emission. The first choice is spontaneous emission, is nothing else than a unitary transformation-- unitary transformation or unitary time evolution-- of the wave function of the total system. And option B is spontaneous emission introduces-- well, maybe through a master equations or optical Bloch equation-- introduces a random phase into the time evolution of the quantum mechanical system. So what is the picture you have on spontaneous emission? OK. Stop. OK. So at least half of you better pay attention now. So the answer is whenever you have a system and it couples to the electromagnetic field-- you just put a system in an excited state and you wait. The 100% unique and correct answer is the system involves with the following operator, and this is the operator we have discussed many times. This is the operator which completely describes the interaction of an atom with the electromagnetic field. And since the whole system is completely described by this Hamiltonian, the whole system undergoes a unitary time evolution. So if you talk about the total system consisting of the photons in all electromagnetic-- in all relevant modes-- and the atomic system, this total quantomechanical system has a unitary time evolution with this operator. And to the best what our knowledge, this is a complete description which covers all aspects of the system. OK. But, and this is now the next question, there is a certain randomness in spontaneous emission when we go to the laboratory and look at the spontaneously emitted photons. And this is actually what I want to work out with you in-- maybe even today, I think ten minutes may be enough-- what is really the information-- the phase information-- which we have in a photon, which has been spontaneously emitted. I know how to phrase-- the randomness of spontaneous emission. Well, let me write it down. First a very big disclaimer. This question does not contradict the first one. The fact that we have a unitary evolution with this operator is 100% or 110% true. But this operator will actually lead to final states of the photon field, which may not have a specific phase. So if you say there is some intuition that there is something going on with the phase, you may be correct. But everything which is going on with the phase is the result of a unitary time evolution. The system itself is described by an operator, by a Schrodinger equation for the total system. But the question I have for you now is if you detect, let's say, the photon emitted in spontaneous emission, the randomness of spontaneous emission, the-- let me call it loss of phase, or at least the diminishment of the read out. There may be situations where we have a laser beam which has a well-defined phase, photons are scattered, and we just cannot retrieve the phase of the laser beam by looking at the photons. So this is what I mean here. Also, the photons can't come out of a unitary time evolution. 
My question now is: the randomness, or this loss of phase, of spontaneously emitted photons-- and now I want to know your best guess-- what is it due to? Is it only due to the measurement process of the photon? Or is it due to performing sort of a partial trace, averaging over certain states-- so if you're interested in the photon, maybe tracing out the states of the atom, or averaging over modes of the electromagnetic field? And question C is: both are actually possible. So if you look at spontaneously emitted photons and they're not perfectly phase coherent-- they're not reproducing the phase of the laser which has created them-- what is the reason for that? Is it always the fundamental reason, or is there no fundamental reason-- is it only a matter of ignoring information, taking a partial trace? OK. That's pretty good. Yes. What I want to emphasize is that A is very, very important, and I want to discuss it now. But B is always the case. If you ignore the position where the atom has scattered the light-- if atoms scatter light and they are wavelengths apart, then you have maybe optical path length differences. The photon from the laser hits an atom here. It goes to your detector. But from another atom, the photon has accumulated a different spatial phase, e to the i k r. And, of course, what you get here is a random phase. This is why quite often, when we scatter light from many, many atoms, we're not even asking for the phase. We say, one atom scatters light at a certain intensity, I1. And n atoms-- well, we get n times the light. You immediately perform an incoherent sum, because you sort of know deep in your heart that there won't be any interference. So it's always possible, of course, to lose the phase by not controlling every aspect of your experiment. So B is always possible. But for reasons that I don't want to discuss now, the measurement process is very relevant. And so, therefore, I would have answered C here. But let's now discuss the most fundamental situation. And the most fundamental situation is we take our favorite atom with infinite mass-- no Doppler shift, we just pin it down. And we put it in a cavity so that it can only interact with a single mode of the electromagnetic field. So this is the fundamental situation. And a lot of the general situations you get by just summing up over many modes, summing up over many positions of the atom, introducing Doppler effects and all that. It just messes things up. And this is more in the spirit of answer B, that you perform a partial trace and average over many states. But let's now pinpoint what I think is intellectually the most important one, the pure situation where an atom is just talking to a single mode. So let us assume that we have an atom which starts in the ground state, but now we excite the atom. And we excite it by a short pulse. And this can be a pulse which has a pulse angle between zero and pi. And depending on what the pulse angle is, it will admix-- we have the ground state and it will admix something of the excited state. And in the case of a pi pulse, we have 100% in the excited state. So our atomic wave function is this. The excited state-- let's just assume the photons are on resonance. We know how to deal with off-resonant lasers. So therefore, the phase evolution is e to the i omega naught t. But now-- and this is what coherence is about-- there is a very specific phase. And this phase phi comes from the laser.
If you excite the atom with a laser beam but it has a phase shift, then the atomic wave function is phase shifted because every amplitude you admixed into the ground state in form of an excited state was driven by the operator-- the dipole operator, e-- and e, the electric field, has the phase of the laser beam. Sure, there may be-- and kind of all other trivial factors I've set to one here. But there is the phase of the laser, which directly is imprinted into the phase of the wave function. OK. So this is what the atom does-- what the laser beam does to the atom. So we have now an atom which is partially excited. And it carries is an imprint of the phase of the laser. And now after the laser pulse is over, the photonic part of the wave function is a vacuum. We have no photon in our cavity. And now we wait and we allow spontaneous emission. And spontaneous emission is nothing else than the time evolution with the operator I just discussed with you. So after spontaneous emission, well, one is we know for sure the atom is in the ground state. Let me write down the result. If we apply the operator which couples to the electromagnetic field, and we assume only co-rotating terms here. Let's just neglect counter-rotating terms, which can be-- which in near-resonants are irrelevant. What happens is we are now propagating this wave function. And the ground state-- with the ground state of the photon, does nothing. However, the excited state with zero photon, we discussed that excited state with zero photon will actually do Rabi oscillation. Ground state with one photon, excited state with zero photons. And so we have now our knowledge from the vacuum Rabi oscillation that this part of the wave function does nothing. This part of the wave function undergoes single photon vacuum Rabi oscillations. And if we start out with the superposition of ground zero and excited zero, well, this is a superposition principle of quantum mechanics. We can just propagate this part, and we can propagate that part. So what I suggest is when we, really at the fundamental level now, discuss spontaneous emission, we allow this part-- excited atom, empty cavity-- to undergo half a vacuum Rabi oscillation. And then the excited state is in the ground state and the photon state has one photon. It's a completely coherent Rabi oscillation. I just allow half a cycle to evolve. And the result of that is, well, nothing happened to this part. And the Rabi oscillations have now taken us to the one photon state. And it has just swapped excited zero to ground state one photon. So let me write down something which is really remarkable, and then we discuss on Monday-- next Monday-- we discuss how we would really measure the phase. But just look at the two expressions I've underlined. And this tells us that the quantum state of the atom as a two-level system-- ground excited state with all the phase factors-- has been now exactly matched on the quantum state of the cavity. So if you regard ground and excited state as a two-level system, every quantum mechanical subtlety of the atomic system has now disappeared. It's in the ground state. But everything which was coherent, which was a phase which was interesting about the atomic system, has been transferred to the photon field. Yes? AUDIENCE: Do you want to have an alpha e there also? PROFESSOR: Oh yes, please. Thank you. When I said everything, I meant everything. Yes. Yes. OK. So let me write that down and then we have-- so this is what I meant. 
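Here is a minimal numerical sketch of exactly this mapping (assumptions: resonant Jaynes-Cummings coupling of strength g in the rotating frame, basis ordered as |g,0>, |g,1>, |e,0>, initial amplitudes alpha_g and alpha_e e^{i phi} imprinted by the laser, and a half vacuum-Rabi cycle t = pi/(2g)):

    import numpy as np
    from scipy.linalg import expm

    g = 1.0                                  # vacuum Rabi coupling (arbitrary units)
    alpha_g, alpha_e, phi = 0.8, 0.6, 0.7    # assumed initial amplitudes and laser phase

    # Basis: |g,0>, |g,1>, |e,0>.  On resonance, the coupling only links |e,0> <-> |g,1>.
    H = g * np.array([[0, 0, 0],
                      [0, 0, 1],
                      [0, 1, 0]], dtype=complex)

    psi0 = np.array([alpha_g, 0.0, alpha_e * np.exp(1j * phi)], dtype=complex)

    t = np.pi / (2 * g)                      # half a vacuum Rabi cycle
    psi = expm(-1j * H * t) @ psi0
    print(np.round(psi, 3))
    # Up to a trivial -i factor on the transferred amplitude, the atomic superposition
    # (alpha_g, alpha_e e^{i phi}) now lives in the photon number states |0>, |1>:
    # the atom ends in |g>, and the phase phi has been mapped onto the field.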
This is the most fundamental aspect of spontaneous emission, that the quantum state of the atom has been perfectly matched-- perfectly mapped onto the photon field. And the one thing we have to discuss on Wednesday is the role of the phase phi. Phi-- we started out with phi being the phase of the laser. And if the laser is in a coherent state-- I will talk about that on Wednesday-- in a homodyne measurement we can measure phi to any accuracy you want. We can determine the phase of the laser in a homodyne or heterodyne experiment. This phase phi has now been perfectly imprinted into a two-level system for the atom. And now it appears mapped into a two-level system for the photons-- the two-level system between zero photons and one photon. But if we are now doing a measurement either on the atomic system or on the photonic system, we are limited in the accuracy with which we can retrieve phi. And this is what we want to discuss on Wednesday. And this is what I referred to as the fundamental limit of spontaneous emission, because we have not lost any coherence here. It's just that if the phase is only imprinted in one particle, one-particle quantum physics sets us a limitation on how well we can read out the phase phi. OK. Any questions? To be continued on Monday.
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
18_Line_Broadening_II.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So we want to discuss line shifts and line broadening. And I assume you'll remember that last class we did some brainstorming and came up with quite a number of line broadening and line shifts mechanism, temporal lifetime broadening, motional broadening, external field broadening, collisional broadening. And as I indicated at the end of last class, those different broadening mechanism have much more in common than you may think. And you will see that they have much more in common when we look at it from the fundamental perspective of coherence. Any line broadening mechanism comes because the atom experience, the environment and the drive field as a coherent source only for limited coherence time. So the concept of the coherence time will actually provide a common denominator for all those line broadening mechanisms. However, before I present to you this kind of correlation function approach to what's line broadening, I think it's really important that we go through some simple cases. I always want you to learn phenomena in the simplest possible manifestation, in a situation where without any math you see what's going on. And what I particularly love is if there is an analytic solution where everything is sort of transparent here. So therefore, before we discuss in a comprehensive way line shifts and line broadening, I want to go through simple cases. Now, I decided for this class that most of today's lecture uses pre-written slides because those cases are so simple that I would be almost afraid I would bore you if I would write it out. Because if you look at this equation in one second you get it, and it takes me 10 or 20 seconds to write it. On the other hand, I would sort of give the responsibility to you. If I'm flooding you with too much information, ask me questions, slow me down, or say, can you please go over that? So I don't want to obscure things by going through them faster. I also looked a little bit forward to the end of the course. We are about OK with the pace of the course, or maybe one hour behind. And so in order to make room for discussions of superradiance, which I would like to have with you in a couple of weeks, I thought I can save half an hour here by going over that material a little bit faster. But these are really just illustrations of line broadening. And you should actually know most of them already. So the first cases are actually leading you up to the Ramsey resonance, to the method of separated oscillatory fields, which is really absolutely important for high resolution spectroscopy. But before we can fully understand the Ramsey resonance, we have to understand the Rabi resonance. So what I've written down here is for you just the well-known formula for Rabi oscillations. And if you now simply assume we let the atoms interact with the field by a fixed amount time tau in such a way that we have a pi pulse, then this probability as a function of detuning becomes a line shape. And this line shape is plotted here. It has a full width of half maximum, which is approximately 1/tau. And this is-- you can regard it as the Fourier limit of finite interaction time broadening. We are observing the atoms for time tau and the line which is on the order of 1/tau. 
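For reference, here is a small sketch of that line shape (the Rabi formula for a pi pulse of duration tau; the numbers are illustrative):

    import numpy as np

    tau   = 1.0                  # interaction time (arbitrary units)
    Omega = np.pi / tau          # Rabi frequency chosen so that Omega * tau = pi (a pi pulse)

    def p_excite(delta, Omega, tau):
        """Rabi transition probability after time tau at detuning delta."""
        Op = np.sqrt(Omega**2 + delta**2)          # generalized Rabi frequency
        return (Omega / Op)**2 * np.sin(Op * tau / 2)**2

    delta = np.linspace(-30/tau, 30/tau, 20001)
    P = p_excite(delta, Omega, tau)

    # Full width at half maximum of the central feature, in ordinary frequency units:
    above = delta[P >= 0.5]
    fwhm = (above.max() - above.min()) / (2 * np.pi)
    print(f"FWHM ~ {fwhm:.2f} / tau")              # of order 1/tau; side lobes sit further out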
But we also know from the Rabi formula that we have these side lobes, so we see some oscillatory line shape. OK, now, with the advent of trapped atoms, you can often have the situation that you have an ensemble of trapped atoms. You flash on your drive field for fixed amount time tau. At least for most of last century, this was not possible. You had atomic beams. And in an atomic beam, what is fixed is not the interaction time tau but the interaction length l. And then due to the velocity distribution, different velocity groups of atoms interact a different amount of time with that. So for conceptional reasons but also for the historic context, you should sort of have an idea what the Rabi apparatus is. I may have mentioned it, but I regard Rabi as sort of one of my ancestors in my family tree because Rabi was the PhD advisor of Norman Ramsey, who was the PhD advisor of Dan Kleppner, who was the PhD advisor of Dave Pritchard. And Dave Pritchard was my postdoctoral mentor. So I'm really talking here about my scientific great-great grandfather. Anyway, so the famous Rabi apparatus is the following. You use one Stern-Gerlach magnet to prepare a certain hyperfine state. Then you have the interaction region, and this is what we are focusing on. And then later, if a spin flip has taken place, you can figure that out by running it through a Stern-Gerlach analyzer. So it's this ABC region, and we are talking about this middle region where we have an interaction time which is now given by the length divided by the velocity. So what we have to do is-- well, that's why I'm saying it's simple cases, and I hope I can go fast-- you just take the previous result with a Rabi probability and convolute it with a velocity distribution. So when we do that, we find two effects. One is, well, due to the velocity distribution, the line width becomes a factor of 2 broader, but it's still proportional to something on the order of unity divided by the interaction time. And because the different velocity groups have a different kind of oscillations as a function of detuning, the oscillatory behavior now disappears because it becomes averaged out [? of ?] the velocity groups. OK so this is the Rabi method or the Rabi resonance, where we have one interaction pulse or one interaction time for an atom. The next method is now what was introduced by Norman Ramsey as the method of separate oscillatory fields. And for that and other contributions he was awarded the Nobel Prize in 1990. And the difference is the following. In the Rabi method, we let the atoms interact with the drive field for fixed lengths l or fixed time tau. But in the Ramsey method, we interrogate, we drive the atoms with two short pulses, which are separated. So in the simplest case, we do 2 pi over 2 pulses separated by a time t. And now, if you have two pulses, everything is coherent. The two pulses can interfere constructively or destructively. So what you would expect is that-- I want to give you two pictures. One is the Bloch sphere picture, but let's just sort of play with the concept of interference. If this is zero detuning, the two pulses are separated by a time t. And therefore, you would now observe, as a function of detuning, Ramsey fringes and oscillatory behavior, which has a spacing which is 2 pi over the time between pulses. The envelope of the whole fringes is related to the short time of the duration. It's like a double slit experiment. 
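A small sketch of these fringes (illustrative parameters: two pi/2 pulses of duration tau_p separated by a dark time T, built from the exact two-level propagators; the fringe spacing, envelope, and central-fringe width discussed next all come out of this):

    import numpy as np

    def propagator(delta, Omega, t):
        """Two-level propagator in the rotating frame for drive Omega, detuning delta, time t."""
        Op = np.sqrt(delta**2 + Omega**2)
        if Op == 0:
            return np.eye(2, dtype=complex)
        H = 0.5 * np.array([[-delta, Omega],
                            [ Omega, delta]], dtype=complex)   # hbar = 1
        return np.cos(Op*t/2) * np.eye(2) - 1j * np.sin(Op*t/2) * (2*H/Op)

    tau_p = 0.05                      # pulse duration (arbitrary units)
    Omega = np.pi / (2 * tau_p)       # pi/2 pulses
    T     = 1.0                       # dark time between the pulses

    deltas = np.linspace(-80, 80, 4001)
    P = np.array([abs((propagator(d, Omega, tau_p)
                       @ propagator(d, 0.0, T)
                       @ propagator(d, Omega, tau_p))[1, 0])**2 for d in deltas])

    # Fringe spacing ~ 2*pi/T in angular detuning, envelope width ~ 1/tau_p,
    # central-fringe FWHM ~ pi/T -- i.e. set by the total time, not the pulse length.
    print(f"transfer at zero detuning: {P[len(P)//2]:.3f}")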
You know, each slit is broad, and the width here is given by 1 over the short time delta t of the pulse. But then the two slits interfere. And the interference pattern is the distance between the slits, which now in the temporal domain is the time capital T between them. So these are sort of typical Ramsey fringes. And now, if you would average over a broad velocity distribution, then you would kind of average-- you would maybe see one or two side lobes, but the other fringes are averaged out. So the central peak, the resolution is again on the order of 1/T. It's the total time of the experiment, which is setting the ultimate resolution, similar to the Rabi resonance. So this uses the picture of interference between two pulses. But I also want to sort of give you the Bloch sphere picture, because it's beautiful. For that I need a little bit of room. So an atom enters the first Ramsey region, and it has to spin down. You do a pi/2 pulse, which is at 90 degrees. And now what happens is there is a field-free region between the two pulses, so the atom is now precessing at its resonance frequency. OK, but the synthesizer which is attached to your coils is also sort of precessing at this frequency. And when you add zero detuning, the synthesizer and the atoms are aligned again. And the second pi/2 pulse is now flipping the atoms, and you've 100% excitation. And this is what you see here. But now let's assume you are slightly detuned. Then the atom is precessing, and your synthesizer is precessing at a slightly different frequency. And now in the second region, the frequency of the synthesizer may be different with-- the phase of the synthesizer is 180 degree different from the phase of the atom. And then instead of flipping the atoms up and getting 100% in the up state, the atom is now flipped down. And this explains the first minimum. So you should really see that in this long region, nothing happens. But you accrue a relative phase between the synthesizer and the atom, which oscillates at its resonance frequency. So based on this model, you could work out mathematically every aspect of those fringes I've shown you. But I decided to take the equations out of the lecture and just present you with this physical picture. Questions? AUDIENCE: If you have a velocity distribution, would the points of those minimums change or is it still this-- PROFESSOR: Well, good question. The spacing is the time T. But if you have a velocity distribution, let's say a velocity distribution which has a width delta v over v of 30%, what we have kept fixed in the beam experiment are the lengths, the interaction lengths. So time is length over velocity. So therefore, when the velocity changes by plus/minus 30% in your atomic beam-- unless you take a supersonic beam with a very narrow velocity distribution-- then that means one velocity class has this set of fringes. The other velocity class has this similar set of fringes, but like in a harmonica, everything is now spread out by 30%. And if you have now 30% velocity resolution, that means you may be able to see two or three fringes. But the central fringe is sort of like a white light fringe. There is no relative phase shift. And therefore all the different velocity classes will have a maximum at zero detuning. So in the extreme limit of a very broad velocity distribution, the only feature which survives is the central fringe. But this is where you obtain your spectroscopic information from. Will? 
AUDIENCE: You have explained this Bloch sphere picture assuming unbroken coherence between your synthesizer and the field-free evolution of your atom? PROFESSOR: Yes. AUDIENCE: Is it equivalent to if you wrote down the time evolution operator, you would say that there's 0 Rabi frequency, but no detuning in your field? PROFESSOR: Yes. AUDIENCE: So what if I-- is this necessary, to have this unbroken coherence if I unplug my synthesizer and plug in a new one [INAUDIBLE]-- PROFESSOR: Yes, you can do spectroscopy with a resolution delta mu, which is 1/T, only if you have a synthesizer which has a frequency stability which is better than 1/T. AUDIENCE: OK. PROFESSOR: Otherwise, we've done this a little bit when we talk narrowband and broadband. When we talked about narrowband and broadband cases, you are always limited in your resolution by whatever is broader. Here I'm discussing about the width of the atomic system, assuming a perfect synthesizer. But then in essence, you should convolute this result with the spectral distribution of your synthesizer. And then if the synthesizer has a resolution which is worse than that, you would actually blur out the fringes through the convolution with the frequency spectrum of the synthesizer. [INAUDIBLE]? AUDIENCE: So that also means if you add a [INAUDIBLE] to the second coil, [INAUDIBLE] just like if you had [INAUDIBLE] to one of the-- PROFESSOR: Yes, actually, if you would add a pi phase shift to the second coil, then the minimum-- the central feature would not be maximum or minimal. Yes. And that's why when you have atomic clocks with a beam, the question of distributed phase shifts within the microwave cavity play a big role. And this is related to are the two Ramsey zones really at the same phase or not? So that's an important issue for ultimately making resolution spectroscopy. OK. Since-- OK. So we've discussed two methods now, the Ramsey method versus the Rabi method. And let me discuss advantages or disadvantages of the method. So if you're an atomic physicist and you have to give advice to your friends whether they should use the Ramsey or the Rabi method, these are your talking points. So one point is that I said the central feature of the Ramsey fringes is 1/T. The Rabi feature is 1/T, so we are both limited by time resolution, because this is a Fourier limit. But if you work out the details, you find that the Ramsey fringe is about 2 times narrower than the Rabi resonance. Just how things work out mathematically-- I can't give you any real deeper insight why. It just works out to be a factor of 2 narrower than the Rabi resonance. There's one important aspect, and this is the following. When you accrue the spectroscopic information, you compare-- I hope you remember my demonstration-- you compare the atomic oscillator to the synthesizer, but you're not interacting with each other. So therefore you don't have any power broadening. You're comparing the free evolution of the atom with the propagating phase in your synthesizer. And therefore, your Rabi signal has no power broadening at all. And that means that, however, the Rabi signal, at least for small detunings which are smaller than the Rabi frequency, will always depend quadratically on delta. Remember, generalized Rabi frequency, you add or make a Rabi squared and detuning squared in quadrature. So therefore you will always get effects for small detunings which are quadratic, whereas the way how you set up the Ramsey experiment, you can explore a linear dependence. 
So you have more sensitivity here. Finally-- well, not finally. There are more. Next is the Ramsey spectroscopy. It's done in a field-free region, so you're not driving the system. You observe the free evolution. Therefore you have no AC Stark effects due to the drive field. Of course, you drive the system in the two Ramsey zones. And some form of AC Stark effect may come in in the way that you may not have exactly a pi pulse due to AC Stark effects. But this is sort of a higher order effect. The basic spectroscopy is done by comparing the atomic oscillator with the frequency synthesizer. Since it's a field-free region, this region can now be used for-- well, let me say generally for experimental additions. And of course, what should immediately come to your mind is the Nobel-Prize-winning experiments of Serge Haroche where he had two Ramsey regions with microwaves. And in between in the field-free region, the atom passed through another cavity. And in between the Ramsey zones, the atom experienced a phase shift due to the presence of a single photon. So the field-free region could now be used to put in a cavity which was filled with one or two photons. And the atom was in a non-destructive way reading out how many photons were there. So this is another advantage of the Ramsey spectroscopy, that you can now use the field-free region to measure something. You can introduce a phase shift, which can then be read out through the Ramsey interference. Similarly, if you just think of how the resonance comes along, if you had a slightly fluctuating magnetic field between the two Ramsey regions, this would not necessarily broaden your signal. Because what you measure is the integrated phase evolution of the atomic oscillator. So in other words, what you get is shift of the resonance, which is the average over the inhomogeneous magnetic field. Whereas when you do a Rabi resonance, whenever you have a field inhomogeneity, you broaden and shift the resonance to this field value. So in other words, the Ramsey resonance depends only on the average along the mean energy separation between the two levels. And therefore inhomogeneous fluctuations, spatial fluctuations, can completely average out. Whereas in the Rabi method, any kind of fluctuations leads to line broadening. And finally, I will explain that in more detail later on. But what happens in the Ramsey method if the separation between the two regions is much longer than the spontaneous lifetime of one of the levels? Do you now get a resolution which is 1/T, the temporal separation between the two interrogations? Or do you get a resolution which is 1/tau, the lifetime of the excited level? What do you think? AUDIENCE: 1/tau. PROFESSOR: It would be a good clicker question. So who thinks it's 1/tau? Who thinks it's 1/T? So who thinks it's limited by lifetime? Who thinks it's limited by the interrogation time? OK. So that's a minority for the Ramsey method. The minority is correct. And the picture you should have is that you remember the sort of picture of those oscillators. But if some atoms decay away, it diminishes your signal. But the interference comes only from the survivors. And the survivors have survived in the exponential tail of the natural decay, but they are longer lived. And therefore, you can actually do spectroscopy which is narrower than the natural line width, using Ramsey spectroscopy. But if you do Rabi spectroscopy, you're limited by the spontaneous lifetime. 
And this is probably what was in mind of the other people who raised their hand for option A. But why there is a difference between the Rabi and the Ramsey method, that's something I want to discuss later. OK, so Ramsey has the possibility, sub-natural line width is possible when the interaction time is larger than the inverse natural line width. OK. Questions about Ramsey method? Good. Physics Today has a wonderful article written by Ramsey which was reprinted recently, I think on the occasion of his death. I will post this article to our website. Then you can really read about it in the language of Norman Ramsey. OK, let's move on. I said we are discussing simple examples. So we have discussed the example of Rabi resonance and Ramsey resonance. Now I want to talk about line shape with exponential decay. One reason why I wanted to give you a simple model for exponential decay because in the end, everything is exponentially decaying because of the finite lifetime of levels. And with this very, very simple model, I want to convey to you that not all exponential decays are equal. You have to be a little bit careful. And this is just sort of the simplest example. And you learn something by figuring out what is different from spontaneous decay here and what are the consequences of that. So remember where we are. We have the Rabi resonance. I gave you the simple example that the Rabi resonance is applied for fixed time tau. And then we did one extension we averaged over the velocity distribution. But now we can just say, OK, we have our Rabi resonance here. But we assume that while we drive the atom, they decay away. If you want, you can think these are radioactive atoms and they decay [? radioactively. ?] For that situation, this model is exact. So therefore, instead of having a fixed interaction time tau, you have a distribution, which is an exponential distribution. So all we have to do is we have to take our result, which I discussed 10 or 20 minutes ago, with the fixed interaction time and convolute it with the distribution of times the atom experiences the drive field. And what I've introduced here is the exponential is gamma. And the mean interaction time over which an atom interacts with the drive field is just 1 over gamma, and I called that tau. So now we had the Rabi probability here, but now we convolute it with the distribution of interaction times. And this is now the probability that after the Rabi pulse the spin was moved from spin down to spin up. Or if you have an electronic transition, from ground to excited state. So this integral can be analytically solved. That's why it's worth presenting. And what you get is a Lorentzian line shape. And this Lorentzian line shape shows power broadening, which actually you should find nice, because we will sometimes [INAUDIBLE] in power broadening you can't get out of perturbation theory. And a lot what we have done and what I actually want to do for the remainder of this chapter on line broadening is a perturbative approach. So that's another reason I want to present it to you here. These are some non-perturbative results, and they show the physics of power broadening, saturation broadening. But there are two things which are noteworthy. One is the full width at half maximum is not gamma but 2 gamma. So if we had natural decay at a rate gamma, the Lorentzian which we get is only half as wide. 
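A quick numerical check of that convolution (a sketch; gamma is the decay rate of the interaction time, Omega the Rabi frequency, and the closed form below is written up to normalization conventions -- it reproduces the two limits quoted above, FWHM of 2 gamma at low power and 2 Omega at high power):

    import numpy as np
    from scipy.integrate import quad

    gamma, Omega = 1.0, 3.0        # illustrative values

    def p_transfer(delta):
        """Rabi probability averaged over an exponential distribution of interaction times."""
        Op = np.sqrt(Omega**2 + delta**2)
        integrand = lambda t: gamma * np.exp(-gamma*t) * (Omega/Op)**2 * np.sin(Op*t/2)**2
        return quad(integrand, 0, 50/gamma)[0]

    # Closed form consistent with the limits above:
    #   P(delta) = (Omega**2 / 2) / (gamma**2 + Omega**2 + delta**2)
    # -> Lorentzian with FWHM = 2*sqrt(gamma**2 + Omega**2).
    for d in (0.0, np.sqrt(gamma**2 + Omega**2)):
        numeric  = p_transfer(d)
        analytic = 0.5 * Omega**2 / (gamma**2 + Omega**2 + d**2)
        print(f"delta = {d:5.2f}:  numeric {numeric:.4f}   analytic {analytic:.4f}")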
But you can immediately say, well, that can be understood because here I assumed the atoms just decay away no matter whether they are in the ground or in the excited state. And I gave you a model that you assume there is maybe radioactive decay independent of the internal state. And now you can wave your hands and say, OK, if you have only decay in the excited state and not in the ground state, this should give you a factor of 2 in the width. And this may explain why we have a full width of half maximum here of 2 gamma and in spontaneous decay it is gamma. But there's another thing which is interesting, maybe more interesting. And this is the power broadening. If you take the power-broadened Lorentzian line width and we look at it in the limit of high power, it is 2 times the Rabi frequency. Well, if we have a system which has spontaneous decay and we would go to the high power limit-- we've discussed it before. What you get is square root 2 of the Rabi frequency. So the message I can give you here is that saturation broadening, power broadening depends sensitively on the exact nature of the decay and of the lifetime broadening involved. And if you really want to do it right, you have to use the optical Bloch equations. So let me just write that down. So what we learned from that is, yes, we get power broadening. We have a simple model for power broadening here. But power broadening depends sensitively on the nature of the decay process. And so if you want to get this result without any assumptions or approximations, you should use the optical Bloch equations. And yes, your homework assignment looked at the optical Bloch equations. And I think you also found out that some results for the optical Bloch equations really depend on the ratio of gamma 1 gamma 2 or T1 and T2. So that is with a more mathematical formalism shows you that the way how you introduce decay into the atomic system, it's not just there is one time constant and the result just depends on the time constant. There are some subtleties. Questions about that? OK. Now-- and many people have asked me about it-- I think for the first time in this course we bring in motion of the atoms. So the atoms are now not pinned down at the origin. Maybe you can imagine you have an atom which is held in a solid state lattice with a nanometer, and we just look at the internal structure. We have maybe some ions implanted into a material. And these ions are fluorescent, and we are probing them. Or we're doing spin flip on nuclei, which are nuclei of atoms which are part of a condensed metal lattice. Or if you want to think more [? with ?] the methods of atomic physics, you have the most tightly confining ion trap in the world. You are deeply in the Lamb-Dicke limit, and your ion just cannot move. It's always in the ground state of the ion trap, and all you are dealing with is the internal degree of freedom. Actually, let me make a comment. I often see when people approach me and ask me question that they are not necessarily making the separation. When they think about what happens to the internal structure of atoms, what is in their head is, but there is motion, there is recoil. You can, in my experience, always separate the two. You can create a situation where you only probe the internal degree of freedom by tightly confining the atom, and then you can relax the condition that the atoms is tightly confined. Now the atoms can move, and then all of the things we want to discuss now come into play. 
Sometimes people assume, yes, but if you confine an atom, doesn't the atom always have a de Broglie wavelength? And isn't that another length scale? The answer is no, because you need a coupling between the internal degree and the external degree, or you need some way of exciting the external degree of motion. And if you have a tightly confining ion trap, h bar omega, the next vibrational level in the ion trap, is so high that you may not excite it with the recoil of a photon. But there is another limit which I often find very useful, and this is the following. When we talk about spectroscopy, spin flips, electronic transitions, we have not really talked about the mass of the atom. The mass of the nucleus only appeared in the reduced mass. Remember when we did the hydrogen atom last class? We had the reduced mass, which was slightly different from the electron mass. So if you want to completely exclude the motion of the atom, just work in the limit that the nucleus has infinite mass. If the nucleus has infinite mass, its de Broglie wavelength is 0. It's confined in a harmonic oscillator to sort of a delta function. So by just assuming that you deal with infinite mass, you automatically neglect all possible motion. And as you will actually see from the next formula, working in the infinite mass limit means that your Doppler shift is 0, your recoil shift is 0, everything is 0. So either way, I've given you now two ways in which I recommend that you think about all the physics we have discussed which deals with the internal degree of freedom: by either saying the atom is tightly localized. But then some people say, ah, but then it has Heisenberg uncertainty. There's a lot of momentum. It doesn't matter. If it's localized, it's localized. But you can also assume just the infinite mass limit. And in both cases, the result is you can completely talk about so many aspects of internal excitations without even considering what happens externally. Any questions about that? So now we take the mass from infinity to a finite value, and now we want to allow the atom to move and have kinetic energy. So let's start out very simple. We have an atom which is at rest in the excited state, and it emits a photon. Before the emission of the photon, the total energy is the excitation energy. But after the emission of the photon, the atom is in the ground state. The photon has been emitted. But now the atom, due to the photon recoil h bar k, has kinetic energy. So therefore the emission of the photon does not happen at the resonance frequency of the atom, because some part of the electronic energy goes into the kinetic energy of the recoiling atom. And this is called the recoil shift. We start at 0 velocity. At 0 velocity, you don't have any Doppler shift. But you do have a recoil shift. Well, we can play the same game. We have an atom at rest. It absorbs a photon. And after the atom has absorbed the photon, it's in the excited state. But now, if you want to transfer the atom to the excited state, you have to excite it with a frequency which is slightly higher than the resonance frequency. Because the laser has to provide not only the energy for the electronic excitation but also the kinetic energy of the recoiling atom. So therefore we find that, due to the recoil of the photon, the absorption line and the emission line for an atom at rest are shifted. The shift is the recoil energy: h bar k, the momentum of the photon, squared, divided by 2 times the mass of the atom. 
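A small sketch evaluating this recoil shift, E_rec/h = (hbar k)^2 / (2 m h), for two illustrative cases of my own choosing (a visible alkali line and a methane-like infrared line); the recoil splitting between absorption and emission discussed next is twice this shift.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
amu  = 1.66053907e-27    # kg

def recoil_shift_hz(wavelength_m, mass_amu):
    """Recoil shift E_rec / h in Hz, with photon momentum hbar*k = h/lambda."""
    k = 2 * np.pi / wavelength_m
    m = mass_amu * amu
    return hbar * k ** 2 / (2 * m) / (2 * np.pi)

# Illustrative cases (my numbers, not taken from the lecture):
print(f"589 nm line, mass 23 amu : {recoil_shift_hz(589e-9, 23):8.0f} Hz")  # ~25 kHz
print(f"3.39 um line, mass 16 amu: {recoil_shift_hz(3.39e-6, 16):8.0f} Hz") # ~ 1 kHz
# The recoil *splitting* between the absorption and emission lines is twice
# this value, i.e. on the few-kilohertz scale for the infrared example.
```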
And the shifts are opposite for absorption and emission. So therefore, if you look at the two processes for absorption and emission, there is a recoil splitting between the two. This recoil splitting between emission and absorption is just a few kilohertz. And it was really one of the wonderful accomplishments when high resolution spectroscopy came along and John Hall at Boulder managed to have lasers stabilized to sub kilohertz. For the first time, this recoil splitting could be resolved. So he had set up some intracavity absorption, and he saw sort of two peaks in some kind of spectrum. I don't remember the details, but two peaks split by a few kilohertz were really the hallmark of the photon recoil shifting the lines away from resonance. OK, so now we know what the kinetic energy of the atom does. If it emits a photon, there is recoil. But now, in addition, we can drop the assumption that the atom is initially at rest when it absorbs or emits. Now the atom is moving. But for that, we don't need any new concept. Because the moving atom-- we can just do a transformation into the frame of the atom where the atom is at rest. And then just using the relativistic transformation, we are now transferring the laser frequency from the atomic frame into the lab frame. So what I've written down here is the general special relativity formula for the frequency shift. And I've assumed that the photon is emitted at an angle phi with respect to the motion of the atom. So now we obtain-- OK, let me do a second-order expansion. Usually, our atoms are non-relativistic, so it's the first- and second-order term which are most important. And if we are now looking in the lab frame, what is the frequency where we emit and absorb photons. It's a resonance frequency minus/plus the recoil shift, which we have already discussed in isolation by assuming we have atoms at 0 velocity. But now the velocity of the atom leads to a first-order and second-order Doppler shift. If v/c is small, what is most important is the first-order Doppler shift. And in almost all cases [? where we ?] do spectroscopy, dominant line broadening effect comes from the first-order shift. However, let me point out that the first-order shift can be suppressed. One simple way to suppress it-- interrogate the atoms at an angle of 90 degrees. Have an atomic beam. And if you interrogate them with a laser beam at 90 degrees, the cosine phi is 0. k dot v is 0. And this is the oldest method to do Doppler-free spectroscopy. In your new homework assignment, you will discuss saturation spectroscopy. If you have a broad velocity distribution, but you find a way of labeling atoms with a certain velocity class, then you have created your own narrow velocity class where the Doppler broadening is absent because you've only one velocity class. And these methods of nonlinear spectroscopy, the concept will be developed in the homework assignment. Finally-- and this will be the next chapter which we talk in class here in about two weeks-- this is by having two-photon spectroscopy. To give you the appetizer, if you have two photons from opposite direction, the Doppler shifts. One has a positive, one has a negative Doppler shift. And if you stack up the two photons, the sum of the two Doppler shifts is 0. So two-photon spectroscopy provides you an opportunity to completely eliminate the Doppler shift. However, no matter what you pick here for the angle, there is a part of the second-order Doppler shift you can never get rid of. 
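A minimal sketch of the sizes of the two Doppler terms in the expansion just described, nu ~ nu0 (1 + (v/c) cos phi - v^2/(2c^2)), with illustrative numbers of my own for an optical transition and a thermal velocity; it shows why a 90-degree geometry removes the first-order term but not the second-order one.

```python
import numpy as np

c = 299792458.0

def doppler_terms_hz(nu0_hz, v_ms, phi_rad):
    """Magnitudes of the first-order (k.v) and second-order (time-dilation)
    Doppler terms in the non-relativistic expansion of the frequency shift."""
    first  = nu0_hz * (v_ms / c) * np.cos(phi_rad)
    second = nu0_hz * v_ms ** 2 / (2 * c ** 2)
    return first, second

nu0 = 5.1e14    # optical transition near 590 nm (illustrative)
v   = 500.0     # typical thermal speed in m/s (illustrative)

for phi_deg in (0, 90):
    f1, f2 = doppler_terms_hz(nu0, v, np.radians(phi_deg))
    print(f"phi = {phi_deg:3d} deg: first order = {f1/1e6:7.1f} MHz, "
          f"second order = {f2:6.1f} Hz")
# Probing at 90 degrees kills the ~MHz first-order term (cos phi = 0), but the
# much smaller (here sub-kHz) second-order term comes from time dilation and
# survives for any geometry; only reducing v itself (cooling) suppresses it.
```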
And this is something important to keep in mind. The second-order Doppler shift, at least one part of it, comes from the relativistic transformation of time. So if you have atoms moving at different velocities, time in the frame of the atom ticks slightly differently depending what the velocity is. And therefore, if you do spectroscopy, you measure time in the lab frame, but the atoms measure time in the resonance frequency in their own frame. Then there is inevitably broadening. So for instance, when people did Ted Hansch's experiment, the famous two-photon spectroscopy on hydrogen, high resolution, determination of the Lamb shift, the Rydberg constant-- one of the flagship experiments of high resolution laser spectroscopy. A limit is the second-order Doppler effect because of its relativistic nature. And the only way to suppress the relativistic Doppler effect is by cooling the atoms, reducing their velocity. So let me just write that down. So when we suppress the first-order effect, then the limit is given by the second-order term. And just repeat, the second-order term cannot be eliminated by playing geometric tricks, 90-degree angles and such, because it's fundamentally rooted in the relativistic definition of time. OK. Any questions about recoil shifts, Doppler shifts? Yes. AUDIENCE: What are the correct way to determine whether we're in tightly confined [? regime ?] or not? PROFESSOR: OK, the question is what is the criteria on whether we are tightly confined or not. I will give a full answer to that in about a week when I discuss with you in great detail the spectrum of a confined particle. And what we will introduce is it's the frequency of harmonic confinement. And we have to compare the frequency of harmonic confinement to two other relative, to two other important frequencies. One frequency is the recoil frequency. And another one may be the natural line width. So in other words, when we discuss the spectrum of confined particles, we can discuss it as a function of three parameters-- confinement frequency, recoil frequency, and natural line widths. And based on the hierarchy of those three frequencies, we find limiting cases. And we will find, then-- and this is probably what you are aiming for-- at some limit when you reduce the confinement in your harmonic oscillator, you will actually retrieve the free gas limit. Or to be very brief, confinement, you have the benefit of confinement. Confinement means the motion of the atom is quantized in units of h bar omega. So you shouldn't think about velocity. You should think about discrete levels. And I will show you that the spectrum which is broadened becomes a spectrum with discrete levels and side bands. As long as you can resolve the side bands, you can see them, you can actually address the line in the middle which has no motion blurring at all. But you have to resolve it. So one condition here now is that the harmonic oscillator frequency is larger than the natural broadening of each line. If the lines blur, you're pretty much back to free space and you've lost the advantage of confinement. But maybe we can discuss some of those aspects after I've introduced the line shape of confined particles. OK. OK. Let me now discuss briefly, or use what we have just discussed about Doppler shift to discuss the line shape in a gas. Well, of course, line shapes in a gas-- that's what all people observed when they did spectroscopy before the advent of trapping and cooling. 
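Returning briefly to the question about confinement: a small numerical illustration of the frequency hierarchy mentioned in the answer (confinement frequency vs. recoil frequency vs. natural linewidth), with invented but typical trapped-ion-style numbers; these are assumptions for illustration, not values from the lecture.

```python
import numpy as np

hbar = 1.054571817e-34
amu  = 1.66053907e-27

def recoil_freq_hz(wavelength_m, mass_amu):
    """Recoil frequency E_rec / h in Hz for a photon of the given wavelength."""
    k = 2 * np.pi / wavelength_m
    return hbar * k ** 2 / (2 * mass_amu * amu) / (2 * np.pi)

# Illustrative numbers (assumptions): a ~1 MHz trap, a narrow 1 kHz transition,
# and a 729 nm photon recoiling against a 40 amu ion.
trap_freq_hz  = 1.0e6
linewidth_hz  = 1.0e3
recoil_hz     = recoil_freq_hz(729e-9, 40)

print(f"trap frequency   : {trap_freq_hz/1e3:8.1f} kHz")
print(f"recoil frequency : {recoil_hz/1e3:8.1f} kHz")
print(f"natural linewidth: {linewidth_hz/1e3:8.1f} kHz")
print("sidebands resolved (trap > linewidth):", trap_freq_hz > linewidth_hz)
print("recoil below one trap quantum        :", recoil_hz < trap_freq_hz)
# When both conditions hold, the carrier line is free of motional blurring.
```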
But even now, we often have a situation that we have a thermal clouded microkelvin temperature. And what we see is still broadening due to the thermal motion. So therefore, let me just tell you a few aspects of that which you might find interesting. So one is when we have an non-degenerate gas, this is described by a Boltzmann distribution, sort of a Gaussian distribution of velocity. And therefore, if the first-order Doppler shift is relevant, the first-order Doppler shift is proportionate to v, so therefore the spectrum we observe is nothing else than the spectrum in velocity multiplied with a k vector. The Doppler shift is k dot v, and therefore the velocity distribution by multiplying it with k is turned into a frequency distribution. And so the classic frequency distribution you would expect in a gas is simply the Gaussian distribution. And the Doppler width, the spectroscopic width, is nothing else than whatever the characteristic speed in your Boltzmann distribution is, typically the most probable speed, 2 times the temperature over the mass multiplied with the k vector. And in many cases, it is this Doppler width which dominates. I've just given you typical examples here. If you've stabilized your laser to a room temperature vapor cell, you will encounter typical Doppler widths on the order of 1 gigahertz. This is 100 times larger than the natural line width, which is 100 times larger than the recoil shift. So that is the usual hierarchy of shifts and broadening mechanisms. So therefore, you would think that if the Gaussian line widths due to the velocity distribution is 100 times larger than the natural line widths which is described by Lorentzian, you can completely neglect the Lorentzian. But that's not the case, and that's what I want to discuss now. What happens is you have a Gaussian which is much, much broader. But the Gaussian decays exponentially, whereas your narrow Lorentzian decays with a power law. So just to give you the example, if you go two full half-line widths away from the center of a Gaussian, the Gaussian has dropped to 0.2%. The Lorentzian has still 6%. So therefore, what happens is if you have your gigahertz broadened line in a gas, but you go further and further away, at some point what you encounter are not the Gaussian wings but the Lorentzian wings. And that's maybe also sort of intellectually interesting. I've talked to you about homogeneous, inhomogeneous line widths. The bulk part of the Gaussian is inhomogeneous because you can talk to different atoms at different velocities. Each atom resonates with its own Lorentzian, and it is inhomogeneously broadened. But in the line widths which is due to the Lorentzian, the homogeneous broadening dominates. And since the Lorentzian has information about either the natural line widths or in a gas about collisional broadening, you can actually, far away in the wings of your line shape, retrieve information about collisional physics, which, in the center of the line, is completely masked by the first-order Doppler shift. OK. So I've already written it down. So far wing spectroscopy in the gas phase can give you interesting information about atomic collisions and atomic interactions. So having started out by telling you that it's a first-order Doppler shift which usually dominates, but then telling you that if you go far away from the line center, the Lorentzian actually dominates, that means now there are situations where we want actually both. And the general solution, of course, is you do a convolution. 
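A numerical sketch of these two points, with illustrative numbers of my own (a room-temperature alkali-like vapor and a ~10 MHz homogeneous linewidth): the Doppler width k*v_p comes out near a gigahertz, and a few Doppler widths from line center the peak-normalized Lorentzian already exceeds the peak-normalized Gaussian.

```python
import numpy as np

kB, amu = 1.380649e-23, 1.66053907e-27

# Illustrative room-temperature vapor (my numbers, not from the lecture).
T, m, lam = 300.0, 23 * amu, 589e-9
k   = 2 * np.pi / lam
v_p = np.sqrt(2 * kB * T / m)                 # most probable speed
sigma_w = k * v_p                             # Doppler width k*v_p (rad/s)
print(f"Doppler width k*v_p: {sigma_w / (2 * np.pi) / 1e9:.2f} GHz")

gamma = 2 * np.pi * 10e6 / 2                  # Lorentzian HWHM for a 10 MHz FWHM (illustrative)

def gauss(delta):      # inhomogeneous (Doppler) profile, peak-normalized
    return np.exp(-(delta / sigma_w) ** 2)

def lorentz(delta):    # homogeneous (natural/collisional) line, peak-normalized
    return gamma ** 2 / (gamma ** 2 + delta ** 2)

for n in (2, 3, 4, 5):                        # detuning in units of the Doppler width
    d = n * sigma_w
    print(f"delta = {n} Doppler widths: Gaussian {gauss(d):.1e}, Lorentzian {lorentz(d):.1e}")
# A few Doppler widths out, the power-law Lorentzian wing takes over, even
# though at line center it is roughly 100 times narrower than the Gaussian.
```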
Each atom with a given velocity has a Lorentzian. And then you have to do the convolution with the velocity distribution. So therefore, the general situation for gas phase spectroscopy is the convolution of the Lorentzian for each atom and the Gaussian velocity distribution. And since this was the standard case which people encountered when they did spectroscopy in the gas phase, this convoluted line shape has its own name. It's called the Voigt profile. Colin. AUDIENCE: It's not obvious to me what exactly about the atomic collisions and interaction, the wings of the Lorentzian [INAUDIBLE]. Don't you just learn about the bare line? Like the un-Doppler shifted line itself? PROFESSOR: OK, that's a good question, as it is not obvious. And yes, the literature is full of it. Because if you don't have Doppler-free spectroscopy, if you are always limited by the Doppler broadening, put yourself back into the last century. But you really want to learn something about how atoms interact and collide, this was the way to do it. I don't want to go into many details because it's a little bit old fashioned. We have other ways to do it now. But I find it intellectually interesting when we talk about line shapes to realize there are maybe some subtleties we wouldn't have thought about it by ourselves. But just to give you one example, for whatever reason, you're very, very interested. What is the rate of collision of atoms? Let's just assume the simplest model for collisions that when two atoms collide, the excited state is de-excited. Then the lifetime of the excited state is no longer 1 over gamma, the lifetime is the collision time defined as a de-excitation time. And you're really interested for whatever reason, because you have the world's best theory on this object, that you have a theory what is the collision time for excited sodium with argon, with helium, kind of with all different elements. And you have an interesting theory which actually reflects how sodium in the excited state would interact with noble gases. And you really want to test your theory. Well, now what happens is the situation is simple. You have a Lorentzian, but the Lorentzian has a broadening which is the collision rate. And by carefully analyzing the wings of your Doppler profile, you find the collision rate-- as a functional of buffer gas density, as a function of noble gas, whatever you pick. That's one example. Another example would be, there may be also somewhere nontrivial shifts when the atoms collide. We'll talk about it also a little bit later. The experience-- de-excitation is one possibility. But more subtle things can happen. For instance, just a phase perturbation, or when atoms come close to each other, they feel the electric field. And the electric field causes an AC Stark shift. But by understanding the AC Stark shift or AC Stark broadening which comes along with that, you can maybe map out the interaction potential between two atoms. So people were really ingenious in trying to learn something about atomic interactions from spectroscopic information, and that was one of the few tools they had. Other questions? OK. We have covered the simple examples. And now I want to give you a more comprehensive framework called perturbation theory of spectral broadening. And in the last class, I mentioned to you already that, by using this framework, we can more deeply understand line broadening mechanisms. 
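Referring back to the Voigt profile just defined, here is a minimal brute-force sketch of that convolution (a Lorentzian of HWHM gamma for each atom, convolved with a Gaussian frequency distribution of width sigma); the parameter values are illustrative assumptions.

```python
import numpy as np

def voigt_numeric(delta, sigma, gamma, n=8001, span=30.0):
    """Voigt profile by direct numerical convolution: a Lorentzian of HWHM
    `gamma` (homogeneous part) convolved with a Gaussian of standard deviation
    `sigma` (Doppler part). Normalized to unit area."""
    s = np.linspace(-span * sigma, span * sigma, n)
    ds = s[1] - s[0]
    gaussian = np.exp(-s ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    return np.array([np.sum((gamma / np.pi) / ((d - s) ** 2 + gamma ** 2) * gaussian) * ds
                     for d in delta])

delta = np.linspace(-10.0, 10.0, 401)       # detuning in units of the Gaussian width
v = voigt_numeric(delta, sigma=1.0, gamma=0.05)
print(f"area under profile           ~ {np.sum(v) * (delta[1] - delta[0]):.3f}")
print(f"value at center / at 10 sigma: {v[200] / v[0]:.0f}")
# Near line center the shape is essentially the Gaussian; ten widths out,
# where the Gaussian has vanished, only the Lorentzian wing survives.
```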
And one highlight actually will be that next week, using these concepts, we will actually understand that collisions can not only cause collisional broadening, they can also cause collisional narrowing. So some things which are counterintuitive have a very clear description using this method. So I think what I'm able to do in the remaining 20 minutes is to step you through the derivation. And then we apply it to a number of interesting physical situations next week. Now, in a way, I have to apologize that what I present to you is time dependent perturbation theory again. And again with a slightly different notation, so I think I go rather quickly for the part which is just review. But repeating something is also a good thing. But then I will tell you when we go beyond what we have discussed and beyond what you may have seen in textbooks. So in other words, we do time dependent perturbation theory. We have a wave function which involves two states, A and B. And there's a time dependent perturbation v. Schrodinger's equation-- written here in the interaction picture-- tells us that the rate of change of the amplitude B comes about because we start in the state A and the perturbation couples state A to state B. And so we can solve it. At this point, it's not even perturbative, it's general. And in spectroscopy we are interested in the rate of a transition, because we do spectroscopy and we measure what is the population in the excited state. Because there is a rate at which atoms are transferred from the ground to the excited state. So the rate is the probability to be in the excited state per unit time. So what we are interested in now is, what is the amplitude B? And what is the probability B squared to be in the excited state? And now comes perturbation theory. If we take Schrodinger's equation and we integrate it with respect to time-- so we go from B dot to B. And in first-order perturbation theory, we assume the initial state is undepleted. And we replace the amplitude A of t by its value at time t equals 0, which is assumed to be unity. OK. So with that-- oh yeah, and this may be worth noting. I'm not doing anything which goes beyond perturbation theory, but I'm using a slightly different formulation. Because you will see that I need it in a moment. The rate is dB squared dt. And if I take the derivative of B squared, I get B star times B dot, plus the complex conjugate. So now I'm using the perturbative result-- I'm inserting this function B into this expression for the rate. And this is what I obtain. So I can take the time derivative of it, but the time derivative only acts on the upper limit of the integral. So therefore what I get is: B is the integral, B dot is the integrand. And now I get this expression, which has the product of the two matrix elements. Simple mathematics, plug and play. No new concept. But where it leads us now-- and this is what is usually not so much emphasized in perturbation theory-- is that everything which happens to the atom, here the rate at which we excite the atom, now involves a correlation function. It involves sort of an integral over the drive field v at time t and time t prime. And this is the important concept when you want to explain and understand line broadening and such. You are driving the system with an external field. And often in perturbation theory, you assume the external field is just e to the i omega t, and this correlation function is just e to the i omega t. 
And it's so trivial that you don't even recognize that e to the i omega t is not the time dependence of your field. e to the i omega t is sort of the product of the field at t equals 0 and the field at time t and the field at time t prime. But if you have a more general field with lots of Fourier components, the difference between whether it's a correlation function or the field itself becomes important. In other words, I'm now telling you whenever you did perturbation theory, this is what you did. Maybe you didn't notice it, but what you had was actually the correlation function between the drive field at two different times, t and t prime. OK, so our rate is now given by the correlation function of the field. And then we so-to-speak Fourier transform it with e to the i omega 0. OK. Let's just streamline the expressions, make them look nicer. We integrate between time 0 and t. But let's now assume-- which is actually the situation for many fields of interest-- that the field is invariant against translation in time t. So therefore this correlation function does not depend on two times. It only depends on the time difference tau. Finally, because of the complex character of the Schrodinger equation, I had an expression but I had to add the complex conjugate. Remember, the rate was the derivative of B square. And the derivative of B square is B star B dot times B star dot times B. You get two terms. And this is carried forward with the complex conjugate. But if the correlation function has the proper t, that complex conjugation means you can go to negative time. e to the i omega t complex conjugate is e to the minus i omega t. That means now that we can absorb the complex conjugate by integrating not from 0 to t, but having the integral from minus t to plus t. And this will be the next step for most situations of interest. This correlation, if you drive it with the field, the field has a finite coherence time. So therefore this integral will not have any contribution when the times are longer than the coherence time. And then we can set minus t and plus t to infinity. So that will be our final expression which we will use to discuss line broadening and line shifts. But we are not yet there. We need-- so far, I've just done ordinary perturbation theory. The one extra thing is I'm stressing that when we have a product of-- when we had a matrix element squared in perturbation theory, this is really a correlation function between the [? external, ?] the drive field at two different times. We come back to that when I discuss the result. But the second thing I want to introduce now is that this framework, which I have formulated, allows me now to include the fact that different atoms in my ensemble may experience a different drive field. For instance, I gave you the example in last class, if you have Doppler broadening, you have atoms which start out at the same point. But the faster ones move faster and experience the laser field now with a different phase. So different atoms now experience the perturbation v in a different way. So what I've done here so far is I've pretty much written down Schrodinger's equation for a single particle. But now we have to do an ensemble average. So therefore I introduce now an ensemble average by just taking that expression and averaging over all atoms in the ensemble. So then I get the ensemble averaged rate. All the correlation functions we discussed, our ensemble averaged correlation functions and our final result will also have an ensemble average. OK. 
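As a compact restatement of the blackboard expressions just described (in my own notation, with angle brackets denoting the ensemble average introduced above), the first-order amplitude and the resulting rate read:

```latex
b(t) \simeq -\frac{i}{\hbar}\int_0^{t} V_{ba}(t')\, e^{i\omega_0 t'}\, dt',
\qquad
W_{ba} \equiv \frac{d}{dt}\,\big\langle |b(t)|^2 \big\rangle
      = \frac{1}{\hbar^{2}}\int_0^{t}
        \big\langle V_{ba}(t)\, V_{ba}^{*}(t')\big\rangle\,
        e^{i\omega_0 (t - t')}\, dt' \;+\; \text{c.c.}
```

For a field that is stationary in time the correlation depends only on tau = t - t', the complex conjugate extends the integral to negative tau, and for times much longer than the coherence time the limits can be taken to infinity:

```latex
W_{ba} \;\xrightarrow{\;t \gg \tau_c\;}\;
  \frac{1}{\hbar^{2}}\int_{-\infty}^{\infty}
  \big\langle V_{ba}(t)\, V_{ba}^{*}(t-\tau)\big\rangle\,
  e^{i\omega_0 \tau}\, d\tau .
```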
So this correlation function between v of 0 and v of t will go to 0 for very long times. Even the most expensive laser in the world, the electric field which is emitted now is not related to the electric field which is emitted in an hour. Because the phase relationship has been lost, and therefore the correlation function has decayed to 0. OK. So this is the ensemble average. So therefore, what I'm naturally drawn to now is that if I take this correlation function, and I know any correlation function has a characteristic time called the coherence time where it decays. And therefore I can now discuss two limiting cases. One is where the time evolution of the system is started for times much shorter than the coherence time or much longer than the coherence time. And if what I'm telling you right now reminds you of my discussion of Rabi oscillation versus Fermi's golden rule-- yes, this is actually a very analogous discussion. OK, so there are the two limiting cases. If the time is much shorter than the coherence time-- let me give you the example of an oscillating single mode field. The perturbation v of t is just oscillating with one frequency, omega. And that means if I look at the correlation function at time t and time t plus tau, it is simply the amplitude of the field squared times e to the i omega t. And now I can take this correlation function; put it into my integral, which has just disappeared from the screen; do the integration with e to the i omega 0 t. And this is the result I obtain. And of course, this is nothing else than what you have always obtained in time dependent perturbation theory with a sinusoidal field. It is this characteristic sine detuning t over detuning, which in the limit and in the limit of when you square it and go to the limit of long times, it turns into a delta function. This gives us Fermi's golden rule. And of course, it has the same behavior at short times. At short times, the probability for the atom to be in the excited state is quadratic. Quadratic is like an incipient Rabi oscillation. And in perturbation theory, we never get higher up. We just look at the beginning Rabi oscillation. So therefore, the probability is quadratic. But I'm talking about the rate, and the rate is probability divided by time. So that means the rate is linearly increasing in time. So I'm just saying this is nothing else than rewriting the physics of Rabi oscillations. OK. If the time is longer than the coherence time, then we integrate the integral, not from minus t to plus t. We can take the limits to infinity. And that means we obtain a result which is now independent of time. And that means since Wba is the rate, we retrieve a constant rate, and this is what we have done in Fermi's golden rule. So therefore, when we look at the time evolution of a system driven by an external field, the moment we look for the time evolution longer than the coherence time-- and this is where the main interest is in spectroscopy-- we have a Fermi's golden rule result that the system is excited at a constant rate. And I want to now reinterpret this rate. This rate is nothing else than the Fourier transform of a correlation function. It makes a lot of sense. You apply time-dependent magnetic fields, perturbations, fluctuating magnetic fields, whatever, vibration and noise in your lab. You just apply that to the atom, and the atom is nothing else than a little Fourier analyzer. It says, my resonance frequency is omega 0. 
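Back to the monochromatic-drive limit described above, a quick numerical check (prefactors dropped, units arbitrary) that the line shape sin^2(delta*t/2)/(delta/2)^2 grows as t^2 in height, shrinks as 1/t in width, and carries an area 2*pi*t, so that it approaches 2*pi*t times a delta function, which is Fermi's golden rule.

```python
import numpy as np

def line(delta, t):
    """sin^2(delta*t/2) / (delta/2)^2: first-order excitation probability,
    up to prefactors, for a monochromatic drive applied for a time t."""
    x = delta * t / 2
    return t ** 2 * np.sinc(x / np.pi) ** 2    # np.sinc(y) = sin(pi*y)/(pi*y)

delta = np.linspace(-200.0, 200.0, 400001)
d = delta[1] - delta[0]
for t in (1.0, 5.0, 25.0):
    y = line(delta, t)
    sel = delta[y > y.max() / 2]
    print(f"t = {t:5.1f}: peak = {y.max():7.1f} (= t^2), "
          f"FWHM = {sel.max() - sel.min():5.2f} (~5.6/t), "
          f"area = {np.sum(y) * d:7.1f} (2*pi*t = {2 * np.pi * t:6.1f})")
```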
And all of what matters for me to make a real transition is what you offer me at omega 0. And I now Fourier analyze whatever acts on me, the correlation function of the perturbation which acts on me, and I Fourier analyze it. And what matters for my rate to go to the excited state is the Fourier component at the resonance frequency. It's just a generalization of what we have done in perturbation theory when we assumed that we have a drive field only at one frequency. So I've written it down here for you. The rate of excitation is nothing else than the Fourier transform of this correlation function. But let me now take it one step further, which also makes a lot of sense. The Fourier transform of the correlation function-- the correlation function is the convolution of the time-dependent fields with themselves, v of t with v of t plus tau. The Fourier transform of the convolution is the product of the Fourier transform of the field with its complex conjugate-- the power spectrum. So therefore, I can take whatever perturbation the atom experiences in its own frame-- external fields, moving around. Whatever the atom is exposed to, I have to calculate the power spectrum of what the atom feels. And this power spectrum provides me with the excitation rate. It's nothing else than Fermi's golden rule but generalized to the concept of an arbitrary spectrum of the driving field. Questions so far? You can also say that's a wonderful way to look at energy conservation. If an atom is exposed to any kind of environment, it goes from the ground to the excited state only to the extent that whatever acts on the atom has a Fourier component at the resonance frequency. And it is only the power of the fluctuating drive field at the resonance frequency which is responsible for driving the atom. And this is energy conservation. The frequency component has to be omega 0 to take the atom from the ground to the excited state. All the other frequency components take the atom to the virtual state and take it down again. They create maybe line shifts or something like this. But a real transition, a transition where the atom stays in the excited state, requires photons at the resonance frequency. And so to speak, this measures how many photons are acting on the atoms. Let me now give you one or two general features of such correlation functions which I just find very, very useful. And then I think our time is over. So G of omega is now the spectrum of the fluctuating fields. And let's assume, yes, eventually we have a fluctuating field which is somewhere centered at the resonance frequency. After all, we use a laser, but the atom may now move around in the laser beam. The mirrors may be vibrating. So the spectrum seen by the atom is sort of broadened around the resonance frequency. And the broadening is nothing else than 1 over the coherence time of the environment. So let me just normalize the spectrum so that the integral is unity. And then this is trivial but important. The value at the resonance frequency is 1 over the broadening and is therefore the coherence time. It's sort of subtle but important. If you have a normalized spectrum, the more coherent your source is, the larger is the value of the spectrum at the center. Let me do the Fourier transform. If you Fourier transform something like this, it gives an oscillating function at the resonance frequency. But if I just multiply by e to the i omega t and sort of shift everything to 0 frequency, then we find that the temporal correlation function decays. 
And it decays over a characteristic time tau coherence, which is nothing else than the inverse linewidth of its Fourier transform. This has nothing to do with atomic physics. It's just a property of a function and its Fourier transform. But now we have the situation. Our rate was the integral of the temporal correlation function times e to the i omega 0 tau. The integrand here is exactly what I'm plotting here. And so if I perform the integration, at least without getting the last numerical factor, I can approximate the result by the correlation function at time t equals 0 times-- if I do the integration-- the temporal width of this curve, and this is the correlation time tau c. So therefore, if, for instance, my perturbation is the electric field driving the atom through the dipole operator, what I find is the correlation function at t equals 0 is nothing else than the electric field squared, which is what we have called the Rabi frequency squared so far. But now I multiply it with a coherence time. So this result should come very naturally to you because when we have Fermi's golden rule, we have a matrix element squared times the delta function. But I've emphasized it again and again-- the delta function is representative of a spectral width or a density of states. And if we have an environment which causes spectral broadening, 1 over the coherence time is nothing else than the spectral width here. And so I've done here exactly what the delta function in Fermi's golden rule asked me to do. So if I had wanted, I could have just said, look, here is Fermi's golden rule. And by interpreting Fermi's golden rule the way I just did, I could have written down this result for you right away. OK, I think our time is over. And maybe the summary which I could give you now is a good starting point for our lecture on Monday. Reminder, Friday we have the midterm in this other lecture hall in the other building. We start at the normal class time. Please be there on time.
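As a sanity check of this last result, here is a small simulation under an assumed phase-diffusion model of the drive field (my own model, not the one used on the blackboard): the first-order excitation rate at resonance comes out as the Rabi frequency squared times the field coherence time, i.e. the power spectral density of the drive at the atomic frequency.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions): a resonant drive whose phase diffuses
# at rate Gamma_phi, so the field correlation decays as exp(-Gamma_phi*|tau|/2),
# giving a coherence time of order 2 / Gamma_phi.
omega_R   = 0.02      # drive matrix element / hbar (rad per time unit)
Gamma_phi = 1.0       # phase diffusion rate
dt, T     = 0.01, 200.0
n_steps   = int(T / dt)
n_traj    = 400

rates = []
for _ in range(n_traj):
    dphi = rng.normal(0.0, np.sqrt(Gamma_phi * dt), n_steps)
    phi  = np.cumsum(dphi)
    # First-order amplitude at exact resonance: b(T) = -i * omega_R * sum exp(i*phi) dt
    b = -1j * omega_R * np.sum(np.exp(1j * phi)) * dt
    rates.append(np.abs(b) ** 2 / T)

# For this model the analytic rate is omega_R^2 * integral exp(-Gamma|tau|/2) dtau
# = 4 * omega_R^2 / Gamma_phi, i.e. of order (Rabi frequency)^2 * coherence time.
print(f"simulated rate          : {np.mean(rates):.3e}")
print(f"omega_R^2 * 4/Gamma_phi : {4 * omega_R**2 / Gamma_phi:.3e}")
```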
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
1_Resonance_I.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from 100s of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Welcome to a new teaching, new lecturing of 8.421. 8.421 is an advanced course in atomic physics-- graduate level course. 8.421 is part of a new semester sequence in atomic physics. Actually, 8.421 is taken first in this sequence because we start with more basic things about light and atoms. But the cost is designed in such a way that you can start with 8.422 or 8.421. So just to get an idea, who has already taken 8.422? Should be about half the class. OK, great. So yes, you're not repeating anything. And maybe for those of you it's a little bit anticlimactic because you had all the fun. You saw all the great things which can be done with two level systems. And now in this course we sit down and I explain to you what are those two levels. What happens to those two levels in magnetic field and electric fields? How are what they modified by the lens shift and all of that? But you see how the two things are connected. I talk about some course formalities in a few moments. But let me first point out that you're interested or you're doing research in atomic physics at a really exciting time. AMO science is booming and is rapidly advancing. And a lot of it is really do to, well, of course, new insight, new ideas, new breakthrough, but also combined with technology. We have seen over the last couple of decades a major development in light sources. If I remember what lasers I have used in my PH.D. And what lasers you were using, well, there's a big difference. Big difference in performance but also big difference in reliability and convenience. But just a few systems which didn't exist a few decades ago. The Ti:sapphire laser, which has really become the workhorse of generating lots and lots of power in the infrared domain. But then it can also be frequency doubled to the visible. When I was a [INAUDIBLE] in the early '90s, people just starting to use diode lasers in atomic physics. Here you are 20 years later. We see you much more solid state lasers. And I would say even in the last 5 to 10 years there has been another, well, revolution is too strong a word but another major advance by having extremely high power fiber lasers, which are covering more and more of the spectral range. So those advanced lasers empower in the spectral range. We have seen major advances in shaping short pulses. I remember when I was a student how femtosecond lasers were that the latest-- well, they're required. The [INAUDIBLE] and femtosecond pulses could only be produced in a few laboratories in the world with a discovery of the Ti:sapphire laser in Kerr lens mode locking. This has now become standard and is even commercially available. But researchers have pushed on attosecond policies are now the frontier of the field. Well, if you have very short pulses that also opens up the possibility to go to very high intensity, you don't need so much energy per pulse. You just, if the pulse is very short, you reach a very high intensity, which is the range of terawatt. And it is now pretty standard if you focus the short pulse laser. In the focus of the short pulse laser, you create electric field strengths, which are stronger than the electric field in an atom. So therefore, the dominant electric field is the one of the laser. 
And then you may add [INAUDIBLE] on whatever scheme the field between the electons or the electron and the proton. So this is the generational flight. But light also wants to be control. And this is done by using cavities. A single photon would just fly by. But If you want a photo to really intimately interact with a atom-- maybe get it absorbed, immediate absorbed, immediate. If you really want to have the photon as a [INAUDIBLE] state and not just as something which flies by, you need cavities, resonators, and we have really seen peak advances in superconducting cavities as super codings in the optical regime. And cavity QED in the optical and the microwave domain have led to major advances in the series of spectacular experiment performed now with single photons. So the single photon is no longer an idealized concept for the description of life atom interaction. It has been a reality. And single photon control has advances quickly. Well you can make major advances in terms of light. Find new lasers, shorter policies, higher intensity policies, and things like this. But the other part of atomic physics-- one is light, the other one are the atoms-- we haven't invented new atoms yet. We still got stuck with the same periodic table. But we have modified the way how we can prepare and control atomic samples. A big revolution in the '90s or '80s has been the cooling of atoms that now microkelvin, nanokelvin, and with evaporative cooling, even picokelvin regime had become possible. In terms of atomic samplers, this was an evolution which took place during my time as a researcher. Atoms always mean you're the sample of individual atoms. Sometimes you started interaction when two atoms are colliding. But atomic physics was really the physics of senior particles or two particles interacting, colliding, or forming a molecule. But the moment we reach for cooling nanokelvin temperature, atoms move so slowly that they feel out each other. And that means suddenly we have a system to do many-body physics. So the event of quantum degenerate gases and many developments after that with optical lattices and lots of bells and whistles really meant that-- and this dramatic-- that atomic physics has made the transition from single and two particle physics to many body physics. And for several research groups in this end of what are called atoms, this is, of course, an important point here. Well, somewhat related to that but more generally, the precision and preparation and manipulation which atomic physics has reached with quantum systems puts now atomic physics in a leading position at the forefront of exploring new aspects of Hilbert space. One can say that Hilbert space is vast. But what is realized, this simple quantum system is only a tiny little corner of Hilbert space. And atomic physics, if I want to define it in the most abstract way, the goal is to master Hilbert space. And that means we want to harness parts of Hilbert space, which are characterized by quantum entanglement. Maybe single forms between two particles but also between many particles. And of course, this is it to a whole new frontier in quantum computation and quantum information processing. So this sort of should show you how technology, new ideas, control, and manipulation is suddenly opening up whole new scientific directions. And just to add something more recent to the list, we have now a major research direction in AMO physics dealing with cold molecules. And they're even prospects of rewritting chapters of chemistry. 
What happens when you do chemistry but not in the ordinary way but at nanokelvin temperature? Or what happens when you do chemistry where you have coherent control in such a way that maybe the molecules before and after the reaction are in a cool and superposition state. So in that sense, the conclusion of that introduction is atomic physics has been successful because it continues to redefine itself. And to prove the case, I can say when I predict, when I try to predict-- I didn't even try because I know it wouldn't work. But if I tried to predict 10 years ago what would be the hot topics of today, I would have failed. What happens is just breakthroughs and discoveries. And usually they happen in areas where they are not predicted. As another angle, atomic physics has seen more than its usual share of Nobel prizes in the last two decades. Maybe the price in 1989 for ion trapping in Ramsey spectroscopy. Ramsey spectroscopy is used for the generation of atomic clock. Iron trapping is a basic building block. This was sort of givenof some of the technology. But this was the only prize in the long list I'm writing down now which was given for something which was maybe invented a few decades ago. A lot of Nobel prizes are given decades after the discovery. But all the more recent Nobel Prize and this speaks for the vitality of the field, we awarded for developments which had just happened in the decade before the prize. Whether it was laser cooling just invented in the '80s. Whether it was Bose Einstein condensation observed for six years before the 2005 prize on precision spectroscopy with lasers and frequency comb. This was also a development that happened just a few years ago. And the most recent recognition for Serge Haroche and Dave Wineland is about the manipulation of individual quantum system. And this is where the highlights of this were accomplished just a few years, lets say, over the last five or 10 years. OK. Just sort of to make a general case here, I continue to be amazed how interesting ad rich the physics of simple systems are. I actually expect that there maybe even two Nobel Prizes in the near future for, pretty much, understanding the Schrodinger equation. You would say this has been done in the old days of quantum mechanics in the '20s and '30s. And of course, lots of people have been recognized. But there are two aspects of the Schrodinger equation, which hadn't been understood or which have been understood only recently. One is the aspect of entanglement and error correction. Nobody until 10 or 20 years-- nobody until [INAUDIBLE] and collaborators introduced error correction would have thought that the quantum system can [INAUDIBLE] here, but you can reestablish coherence by what is called quantum error corrections. [INAUDIBLE] properties of the simplest wording or equation for just a few-- well, [INAUDIBLE] it's for a few particles-- which we are not known or even the expert in the field would have fled and said, no, this is not possible. And another aspect of actually single particle quantum physics, which has been fully appreciated only recently is the question of [INAUDIBLE] phase and topological phase. All the [INAUDIBLE] in quantum metaphysics, which is also spilling over to atomic physics of quantum [INAUDIBLE] topologically insulate as an [INAUDIBLE] means that there are non-trivial phases-- non-trivial symmetries in the single particle Schrodinger equation. 
So it's just that as a case in point that the single particle Schrodinger equation a lot of people thought in the '40s and '50s. That's it. There is nothing else to do research. And now we when whole new fields emerging exploiting new aspects of the Schrodinger equation. Will there be something else of the same caliber to be discovered? 20 years ago, people would've said no. And I just gave you two examples of major new insight, which is has really changed our understanding of quantum physics. A few years ago, I served on a National Academy of Science committee trying to do the impossible to predict the future of the field. But sometimes the National Academy of Science is asked to give advice and try to provide the best [INAUDIBLE] impossible but is exciting. Of course, we didn't predict the future. But at least to the extent possible, we summarized what are the frontier areas where we see rapid development and where it would be worth investing further. And you will actually see that a number of those frontier areas are where your research happens. One is the traditional area of precision measurements. As long as atomic physics exists, one of the specialty of atomic physics is we can emphasize measurements, atomic locks, and precision measurements of fundamental concepts and all that. And that continues until the present day. It was just two weeks ago that there was a new nature paper on the really major advance in atomic locks. [INAUDIBLE] clock has reached the precision of 6 times 10 to the minus 18. It's amazing. We'll talk more about it. You really have to carefully understand and measure small changes in the black-body radiation because just the black-body radiation creates frequency shifts, which would interfere with their precision. An amazing accomplishment for the field. So precision measurements continue to be an important frontier. Of course, is there's always the aspect of metrology, determine time frequency, and other things with higher and higher [INAUDIBLE]. But there are also applications. Just one example is making atomically. Atomic physics methods can be now used if you open at home in an environment you can measure the magnetic field. So people are now talking by using atoms or artificial atoms in the form of [INAUDIBLE] senders to measure the magnetic field, biological sounds, and all that. So measurement is fundamental aspects but is also applied aspects. Well, other frontiers are, of course, you can use support ultra cold. We've talked about high intensity lasers. Ultra intense. Ultra short. Atomic physics is more and more getting involved with nano materials. Materials with blue properties. Maybe materials with negative index of the refrection, metamaterials or, in general or plus [INAUDIBLE] materials. Nano materials can help to shed light and explore new aspects of how light interacts with matter. And of course, the major frontier is the frontier of quantum information. So given all this excitement, you have many reasons to want to learn more about it. And this course is definitely a good starting point. Let me maybe tell you a little bit what is the philosophy behind the cost and what you will get. That means, of course, at the same time what you will not get. This course is meant as an systematic, basic introduction into AMO physics. It should really lead the basic foundation that when you talk about atoms to talk about light you are really an expert and you can talk about it at the most profound level. 
So it's important here, and this is the goal of this course, to provide enough knowledge and enough foundation for that. So it's not a cause where I just try to sample highlights of the field and provide you with a semi understanding of all this wonderful phenomena. I rather try to focus on selective basic things but then also exciting things but rather explain them thoroughly and teach you by example than teaching you the big overview. The course, if I want to characterize, is I would say it is a conservative course. It's also, r in this sense, traditional. One reason for that is MIT. The tradition we have at MIT. At MIT we have this several generations of atomic physicists who have shaped the field. And I learned atomic physics as a postdoc from Dave Pritchard, who was a graduate student of Dan Kleppner. Dan Kleppner was a graduate student from Norman Ramsey. And Norman Ramsay was a postdoc with I Rabi. And Rabi resonance is this is [? reciprocating ?] of atomic physics. The resonance is sort of what we will also focus on today and in the first week. This is, sort of, the most important concept in atomic physics to really understand the nature of resonances and all its implication. So I should say late in my life-- I was already passed 30-- when I took the first atomic physics class in my life, I took it from Dave Pritchard. And I was really, sort of, amazed about the course, which had the traditional topics but provided a lot of insight. You can teach traditional physics from the perspective of somebody who does research today. So I want to give you all connections. But at the same time, I like a lot about the traditional approach. And some of it can be traced back to Norman Ramsey. So eventually, over the last years, I was the main person who has shaped that on atomic physics course when I expanded it from one semester to two semesters. But when I created a lot of new topics, I always looked through Dan's and Dave's notes and made sure the best of what they taught, the best ideas they put the course, they still survive until the present day. So this course is a development and continuation of a longstanding tradition. I should say I have been immensely enjoyed to co-teach the course on a couple of occasion with [INAUDIBLE] and [INAUDIBLE]. And [INAUDIBLE] has made major contribution to the second part of the course and [INAUDIBLE], especially to what we will be discussing in the next few weeks. So what I think is unusual-- you won't find it in many textbooks is that we start out by discussing the phenomenon of resonance of the [? harmonic ?] oscillator. And we will emphasize for a while the classical part but then also, of course, go to the [? creating ?] the mechanical aspects of [INAUDIBLE]. Now I have to say this [INAUDIBLE] between classical and quantum mechanics is something I will emphasize again and again in the course. I can guarantee you in this course I will sometimes ask you interesting question, which challenge your intuition. And you will most likely recognize that often when your intuition goes completely wrong it happens because you believe too much or you over-interpret one aspect of quantum physics. If I then tell you, but wait a moment, now think classically. Push the classical concept further. [INAUDIBLE] the electron and the [INAUDIBLE] as an [? harmonic ?] oscillator. Regard lights [? catering ?] as [INAUDIBLE] not of a kind of mechanical [INAUDIBLE] but of a driven [? harmonic ?] oscillator. 
Suddenly, a lot of things which come out of quantum mechanics make much more sense. So I've often seen when I had a conflict in my understanding. And it's a [INAUDIBLE] classical and [INAUDIBLE] mechanical explanation, I've learned to trust much more the [? semi ?] classical explanation. So that's why I feel it's important to understand the classical aspects. And usually I would also say understand the means to really understand it's limits. And often I feel you can understand the phenomenon only when you have a quantum aspect, a classical aspect, and we know exactly where they overlap and where they differ. So to see even quantum mechanical objects occasionally from the classical perspective provide [INAUDIBLE] insight. So therefore, I would emphasize classical aspects. And for instance, it may come for , many of who as a surprise and you will see that next week that some aspects like the generalized [INAUDIBLE] frequency, which you all or many of you have seen for a two level system. We find it in classical resonance. Just the classic [INAUDIBLE] of [? motion ?] of a [? childes ?] [? core ?] has a generalized [INAUDIBLE] frequency. And I do feel that it is absolutely important for the understanding of concepts that you know where do the concepts emerge? Where are they? Are they already there in classical physics and [? survive ?] in quantum physics? Or is it something new, which is genuinely [INAUDIBLE]. So yes, I will teach a little bit more classical physics in [INAUDIBLE] course. But because I've seen within my own research experience that it's healthy to shape the intuition for the fuller understanding of the systems [INAUDIBLE] for. So residence is an over arching [INAUDIBLE]. But then we have to introduce our main players. The atoms come to stage. And we want to understand the electronic structure, the [? fine ?], structure, the hyperfine structure, you're going to understand what happens in magnetic, electric, and electromagnetic light fields. We want to understand in a deep [? way ?] how do atoms interact with radiation. This also leads us. There's a big difference. You would say, well, what's the difference when atoms interact with microwave and atoms interact with light. Well light or at high frequency spontaneous emission becomes important. And then you have an [INAUDIBLE]. You have an [INAUDIBLE], which couples automatically to many, many states. So that's why radiation is different from just electric and magnetic fields because of the presence of all the vacuum modes, and we'll talk a lot about it. There's one special aspect about the cost, which I don't think I've seen in textbooks in the same way. We are singling out in a rather long unit the aspect of line shape. OK, we talk a lot about an [INAUDIBLE]. But when you measure the resonance, there is a line shape. And I found it extremely insightful when I first saw Dave [INAUDIBLE] doing it in his atomic physics course to just talk about all aspects which modify a resonance from a data function from a stick diagram into a real shape. It can be [? doppled ?] up water. It can be finite lifetime [INAUDIBLE]. It can be an [INAUDIBLE] field. But there are lots of interesting effects. And By discussing them all together you gain major insight. So we discuss how [INAUDIBLE] recoil, how the velocity of atoms effect the line shape. And if you think you've understood everything, I will talk to you about in a very [? counter intunitie ?] aspect of line shapes named the [INAUDIBLE]. 
If you put atoms in an environment, you would say they collide, and this should lead to collisional broadening. But there is one aspect where collisions actually lead to narrowing. And that's sort of a highlight of this chapter, which really shows you how all of those broadening mechanisms are somehow connected. Finally-- and this puts us more towards the end of the course-- we want to understand what happens when atoms interact not just with one photon but with several photons. Then we talk about multi-photon processes. I should actually say that I'm emphasizing the multi-photon processes a lot. I mean, often we simply do a transition between two levels, and there is an operator which connects them. But to understand the multi-photon aspect is important. And to give you just one aspect of it: when you think you do one-photon physics, often you do two-photon physics. A lot of people think atoms can absorb a photon. I've never seen in my life an atom which has absorbed a photon. The photon is immediately re-emitted. It's a scattering event. An atom cannot absorb a photon for good, because the lifetime of the excited state is short. So when you think absorption is a single-photon event, there is a limit within which, yes, you're allowed to think about it that way. But if you get confused-- and it will confuse you-- then you need the fact that every absorption process is actually a two-photon process: photon in and photon out. And sometimes, by remembering that it's not single photons, that there are always two photons involved, it helps you to avoid some pitfalls of the single-photon picture. So therefore, multi-photon, yes-- it's not just high-intensity two-photon transitions in atoms and such. It's also about the deeper understanding of how a single photon interacts with atoms. And finally, there is something which has fascinated many physicists: the question about coherence. And coherence is as fascinating as it is diverse, because coherence has many aspects and many implications. And I also like a lot, in this traditional MIT course, that coherence is singled out as a chapter, and I tell you about all the different facets of coherence in this chapter, not scattered throughout the whole course. We have coherence in single atoms. The simplest one is the coherent superposition of two levels, which is so simple that it's almost boring. But there is an enormous richness when we put in a third level. About 20 years ago, an understanding of three-level physics really created a new frontier in the field. Let me just give you a few keywords: lasing without inversion, electromagnetically induced transparency. Those concepts happen due to coherence between three levels. And we'll talk about that towards the end of the course. Well, we have coherence within an atom, between two or three different energy levels. But we can also have coherence between the atoms. And at that point, the atoms do not act individually-- they act collectively. Of course, coherence between atoms can be the coherence of many atoms in a Bose-Einstein condensate, where they form one big matter wave. But it can also be a coherence where the atoms are not coherent in the sense of forming a condensate, but they interact in a coherent way with light. So there is one aspect where the atoms act coherently: they may be in different quantum states, but the interaction with the light is absolutely identical.
And when it then comes to the optical properties of the system, the light doesn't care whether the atoms are different. The light only cares whether the atoms interact with the light in an absolutely identical way. And then you have certain symmetries in the interaction with the light. And this coherence between many atoms in the interaction with light is-- I'll just give you the keywords-- responsible for the process of phase matching. When you use a crystal for frequency doubling, you want all the atoms to interact coherently. And it is also important for the phenomenon of superradiance. I find this subject of coherence particularly fascinating. I should say it was the subject of coherence where, maybe 10 years ago, I was in a long-lasting controversy with some colleagues in my field. You know, there are people like Phillips-- when I meet him, he's one of the smartest atomic physicists and one of the fastest ones, and ideas just fly back and forth. And there was only one example where we disagreed over a long period of time, where he had good intuitive arguments, I had good intuitive arguments, and we couldn't agree. And this was related to the question, when it came to matter wave amplification-- you know, some coherent process-- whether it is really necessary to have a Bose-Einstein condensate, or whether you can get away with less, which is more the superradiant way, where the atoms are in different states but they have an identical way of interacting with light. And in the end I could prove that certain aspects which people in the field thought were due to the coherent nature of the atoms-- due to the fact that these atoms can be extracted as an atom laser-- were just some form of superradiance in disguise. So anyway, you will notice some of my own interests in the chapter on coherence when I teach it. Phase matching and superradiance are the physics of the '50s, but a deeper understanding of them really developed when we had Bose-Einstein condensates and could put some of those ideas to the test. So that's what you can expect. Here is an overview of the topics. The course will have 26 lectures, and these are the topics we cover. Do you have any questions about the structure of the course? There is something I want to say about homework. This semester, [? Ike Shaman ?] has teamed up with me. And as many of you know, [? Ike ?] is one of the real drivers of MITx, edX, and teaching and learning at MIT. So he is now teaming up with me and trying to put some of the pieces online, so that you can have conceptual questions to work on, and you will get immediate feedback on whether you're on the right track or not. So this is a new element which we want to introduce to the course. I still think there are certain problems where you just have to sit down with a white piece of paper, not knowing what to write, and start scribbling some equations. So we'll have conventional problems. But we also want to experiment with the extent to which it is possible to use elements of new technology in a course like that. I actually have to say, I regard it as a very interesting and promising experiment to bring some of these aspects of teaching and learning into a graduate course. When MIT does MITx and, you know, broadcasts education to the whole world, it's much easier to think about what to do when you have a basic introduction to classical physics-- there is, sort of, a standard curriculum.
A lot of questions there are simple; it's pretty straightforward how you can pose simple questions as multiple-choice questions. But this is different. This is really a graduate course in atomic physics. It's about a deep and profound understanding of complicated and complex physics. I'm not sure to what extent that complexity can be broken into smaller elements which can be put up as multiple-choice questions. Probably not. But on the other hand, since MIT will never reach millions of people with a graduate course in atomic physics, the whole interest of going to the whole world and reaching the whole world is absent. And for me, I just want to introduce this technology to enhance the residential experience for you students. So for instance, videotaping-- I'm not sure if these videotapes will ever be shown to a wider audience. The primary audience may be people like you who have a conflict in attending a class and want to check what was presented in class. I also have the idea that, in the future, once we have the videotapes, maybe I can tell you to look at the video recording of the class, and instead of having a lecture, we'll just have a classroom discussion. So these are aspects I want to experiment with. It's exciting to see how new technology can be used for a course which is very, very different from all the other courses which have been put online at MIT. Well then, as expected, we have some 20 minutes to start with our first topic, which is resonance. Resonance is what describes the physics of two-level systems. And also-- and we will touch upon this-- resonances are the way precision measurements are made. So what is a resonance? Well, we can first look at the classical resonance. A resonance is something where we have some variable and it varies periodically. So in other words, yes, there is a variable, which can be anything. It can be the population of a quantum state. It can be an electric field. It can be the position of an atom. It can be anything you can think about and anything you can measure. And if this variable varies periodically, you have a resonance. Of course, the periodic variation usually requires that you drive the system. So you first drive it, and then the system oscillates. And this means now that when you drive the system-- so this may be a free oscillation, but now you drive the system with a variable frequency-- what you then observe is a peak. So the phenomenon of resonance is that you have something which can vary periodically, and when you drive it, you see a peaked response when driven with a variable frequency. Yep, this is pretty basic, and I don't want to dwell much more on it. But I can tell you, in atomic physics we are interested in every single possible aspect of this resonance: the shape of the curve, how we can modify it, what happens when we drive it strongly, when we drive it weakly. I mean, resonance is really the language we talk to atoms with. But here I just want to give a lighthearted introduction. The first thing we want to add to the phenomenon that there is a resonance at a certain frequency is finite damping. That would mean that, after the system is driven, the oscillation does not last for an infinite amount of time. And that implies that when we drive the system and look at the response as a function of frequency, there is a finite linewidth delta f for the driven system.
And as we will see in many ways, the damping time and delta f are related by Fourier transform. We usually characterize oscillators by the sharpness of the resonance. And the sharpness of the resonance is the ratio of the width of the resonance and its frequency-- or rather the inverse of it. So if you have an oscillator at a kilohertz and the resonance is one hertz wide, we say the resonance has a Q-- a quality factor-- of 1,000. And that means you can observe a thousand oscillations before the oscillation decays away. So what is special about atomic physics here? Why do I emphasize it in the introduction of an atomic physics course? Well, the point is that in atomic physics we often have exquisitely isolated systems-- an atom in an ultrahigh vacuum chamber, or systems which are prepared with all the tools and the precision which we have developed over decades in atomic physics. And therefore, the result is that in atomic physics our oscillators are characterized by an extremely high quality factor Q. Let me give you an example. If we look at an optical excitation-- maybe let me point out something you should all do when you take a class in atomic physics, and even more when you do research in atomic physics: have a few numbers in your mind which match. So, you know, every single person in this room should know what the frequency of light is. What is the frequency of a laser? The number I usually use for those estimates is 10 to the 15 hertz. Who knows what wavelength this laser-- 10 to the 15 hertz-- corresponds to? Well, it will be some visible light. The speed of light is 3 times 10 to the 10. So therefore, if I just use the round number 10 to the 15 hertz, it has to be 300 nanometers. OK, so never forget that for the rest of your life: 300 nanometers is 10 to the 15 hertz. That means that for most of us, who are working with lasers at 600 nanometers or 800 nanometers, the frequency is more like 5 times 10 to the 14 or 3 times 10 to the 14. But just as a ballpark number, 10 to the 15 hertz is 300 nanometers. OK, so if we have an optical excitation-- and many atoms have that-- what is the Q? What is the quality factor of this resonance? Well, when you stabilize your laser to a vapor cell and you look at the resonance, then you observe in the vapor cell that you have room-temperature Doppler broadening-- we'll talk about Doppler broadening later in this course-- and that usually corresponds to a width on the order of a gigahertz. And that means that your quality factor is on the order of 10 to the 6, a million. That's pretty good. A million oscillations. That's a very pure oscillator. But of course, you can do much better if you do Doppler-free spectroscopy-- either by having an atomic beam which is intersected at a right angle, or, even better, by putting the atoms in an optical lattice. And this is what people are now doing with the optical lattice clocks: they put an atom in an optical lattice where the Doppler broadening is completely eliminated. If you take a metastable level, the lifetime of the excited state is maybe one second. And [INAUDIBLE] and other atoms have those metastable levels. Then you can actually get a linewidth which is one hertz. And the Q factor is on the order of 10 to the 15.
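To make these ballpark numbers concrete, here is a minimal Python sketch; the values are the illustrative round numbers used in the discussion above, not precise atomic data.

```python
# Illustrative numbers only, following the estimates in the lecture.
c = 3e8  # speed of light in m/s

# A laser at roughly 300 nm has a frequency of about 1e15 Hz.
wavelength = 300e-9                    # meters
frequency = c / wavelength             # ~1e15 Hz

# Vapor cell: room-temperature Doppler broadening of order 1 GHz.
Q_vapor_cell = frequency / 1e9         # ~1e6

# Optical lattice clock: metastable transition with ~1 Hz linewidth.
Q_lattice_clock = frequency / 1.0      # ~1e15

print(f"optical frequency ~ {frequency:.1e} Hz")
print(f"Q (Doppler-broadened vapor cell) ~ {Q_vapor_cell:.0e}")
print(f"Q (optical lattice clock)        ~ {Q_lattice_clock:.0e}")
```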
I will show you a graphic example of such an experiment-- a one-hertz linewidth of an optical transition in an optical clock experiment-- in the next class on Monday, when I want to discuss other aspects of it. But this is one of the world's best oscillators you can imagine. 10 to the 15-- it's a mind-boggling number. Well, it's clear why clocks have gone atomic. Mechanical systems are actually not bad, but of course not nearly as good. If you take quartz oscillators, well, you can build pretty good clocks out of quartz oscillators. You have quality factors which vary between a few thousand and a million. The best values are reached at low temperature. And actually, even in the era of atomic clocks, quartz oscillators or sapphire oscillators still play a role, because you need, sort of, flywheels. In an atomic clock you may interrogate the atoms only every-- in Ramsey spectroscopy, you know, every ten seconds or so you get a signal. And in between you need a flywheel. And these are oscillators which have a very high signal-to-noise ratio but not the absolute accuracy, and they help you to interpolate between the measurements. We actually see a renaissance of mechanical systems in the form of micro-mechanical oscillators. It was only achieved in the last two or three years that micro-mechanical oscillators could be cooled to their motional ground state. And there's a lot of interest in coupling the motion of a mechanical oscillator to an atomic oscillator, because they have different properties, and for quantum computation and other explorations of Hilbert space you want to have different oscillators and, you know, combine the best of their properties. So there is a real renaissance of mechanical oscillators. And those micro-mechanical oscillators often have quality factors of 10 to the 5. Here I want to show you a picture of such an oscillator. A nice one. This is a micro-fabricated device. It looks like a little mushroom. And what happens is, this mushroom-type structure can confine light which travels around the perimeter as a so-called whispering gallery mode. It's similar to an acoustic mode which can travel in the dome of a big cathedral. That's how it was discovered. It's an amazing effect. I wish somebody would demonstrate it to me. But if you go to one of the ancient cathedrals and you're in a dome, somebody can talk in one direction, the sound can travel around, and you can hear it. There's a guided mode which can travel around the perimeter of the dome. And here, in the microscopic domain, it's light which is confined in such a resonator. So this is a resonator for whispering gallery modes. And that can have a Q on the order of a billion. So the idea is that you have either one of those mushrooms or a glass sphere, and the light can, sort of, travel around. And this is the characteristic of this mode. Well, you can go from a tiny glass sphere to astronomical dimensions, and you also find oscillators. And the Q of those oscillators is not really bad. How good is the Q of the rotation of the earth? It fulfills all of our requirements for a resonance and an oscillator. It's a periodic phenomenon. The Earth rotates around the sun once a year. And the question is, how stable is it? Well, the number is 10 to the 7. It has a Q of 10 to the 7. So the precision of the rotation of the earth is better than one part in a million. You can also look at the rotation of neutron stars. If those neutron
stars emit flashes of X-rays-- pulses-- and you can measure the rotation of those neutron stars, those neutron stars have a quality factor of 10 to the 10. And of course, as always: if a resonance has a high quality factor, it can be used for precision measurements. The narrower the line is, the more sensitive you are to tiny little changes. And you probably know that this pulsar with a Q of 10 to the 10 has been used for the first, albeit indirect, observation of gravitational waves. The pulsar rotates with a very precise frequency, and you can measure it to one part in 10 to the 10. And people have seen that, over the years, the frequency of rotation became smaller. And you can figure out that it becomes smaller by just one part in 10 to the 10, because you have this precision. And what happens is, when the pulsar-- when a neutron star-- rotates, it emits gravitational waves. And the gravitational wave carries energy, which is taken away from the kinetic energy of the rotation. And therefore, the pulsar slows down. So having an oscillator with such a high Q has allowed researchers to find a small effect in the damping of this oscillator, which in this case was gravitational waves. Of course, the story I will tell you is about very small changes of atomic oscillators, which led to the discovery of the Lamb shift and to quantum electrodynamics. But the story is the same: a high-quality oscillator is a tool for discovery. OK, so we've talked about resonances. Of course, there are resonances which are useful and others which are less useful. By useful we mean they're reproducible-- we can really make a measurement, come back, and do it again. And that's not enough for being useful; you also want to learn something from it. So usually we regard resonances as useful when they are connected by a theory to something we are interested in. It can either be fundamental constants or, let me say, other parameters of interest. If you want to measure a magnetic field with very high precision and you look at an atomic resonance, it's only useful if there is a theory which tells you how the shift or the broadening of the resonance is related to the magnetic field. And this is, again, a specialty of AMO physics: we have plenty of resonances which are useful by those standards. And if you compare to astrophysical oscillators, or quartz oscillators, or micro-fabricated oscillators, in atomic physics we have the great advantage that atoms are identical. We know that when you measure a transition in atomic hydrogen in Japan, in Europe, and in the United States, the value has to be the same. For other oscillators, you often don't know that. So the showcase of atomic physics is the Rydberg constant, which is the best known-- the most accurately known-- constant in all of physics. And the reason is that it can be directly measured by performing spectroscopy on hydrogen with highly stabilized lasers. OK, of course, the question is, who's interested in all those digits? Why do you want to spend all of your PhD, or half of your life, measuring the Rydberg constant to maybe 10 times more precision? Well, it depends. It's maybe not something for everybody. But there are some connoisseurs who think that every digit has provided new insight into nature. And let me just give you one example.
If you measure the Rydberg constant very precisely, you can now-- and this has become a frontier of our field-- ask the question: is there a change with time of fundamental constants? So when you measure the Rydberg constant today with 10 to the minus 15 precision and measure it again in a year, who is guaranteeing to you that you will measure the same value? So with the precision which I've just given to you for this measurement, people are now able to say whether fundamental constants have changed by 10 to the minus 15 per year. Of course, you know, the age of the universe is 14 billion years. That's about 10 to the 10 years. So even the worst case is that, if you would go back to the beginning of the universe and the constant would change by 10 to the minus 15 per year, it would have changed by 10 to the minus 5 over the age of the universe. But even this would be dramatic, because the connection shows that life would not have developed. The whole of organic chemistry would have been different if some fundamental constants of nature had been different by one part in 10 to the minus 6, 7, or 8. So there are extremely stringent limits on how much fundamental constants could have changed during the evolution of life, because life would not have been the same if fundamental constants had changed. The question, of course, is, should those fundamental constants change? Well, the answer is, we don't know. But there is a whole research area in string theory where they say that our universe is, sort of, just one of many possible minima in a multi-dimensional space, and this minimum is actually dynamic-- it changes as a function of time. So there are people who wouldn't be surprised if the world is not the same in the future as it is right now, because the universe, or whatever defines fundamental constants, is changing as a function of time. So the question is, will it be during your lifetime, or will it even be during, maybe, your PhD, that we find out that, yes, we measure a fundamental constant using the most accurate atomic clock, and a year later you have measured something that's just a tiny bit, but significantly, different? The second aspect of why you should always measure things as accurately as possible-- and this is, sort of, the tradition of our field-- is: if you can measure something very accurately, do it, because there may be surprises. For instance, when people looked at spectra with higher precision-- and we'll get to this when we talk about atoms in a magnetic field-- when people looked at the anomalous Zeeman effect, the discovery that came out of it was what nobody expected: that the electron has a spin. Or when people saw a tiny shift in the spectrum of atomic hydrogen-- it was a 1,000 megahertz splitting-- it was the Lamb shift. This was the discovery of quantum electrodynamics. And we know that precision always becomes a tool-- a tool to control atomic systems, to control quantum mechanics, with more precision. For instance, if you can completely resolve the hyperfine structure, you can prepare atoms in a certain hyperfine state. If you don't have the resolution, you can't do that. OK, so now let's go back to the resonance. When we look at a typical resonance, we have a frequency omega, a resonance frequency omega 0, and we measure a linewidth delta omega.
In many cases, as we will discuss in great detail, the line shape is a Lorentzian. And the Lorentzian is the imaginary part of the function 1 over (omega 0 minus omega minus i gamma over 2). And then there is this parameter gamma. Gamma, which appears in the Lorentzian, is identical to the full width at half maximum. And the Q factor of such a resonance is omega 0 over gamma. Let me finish with a few more minutes-- a short note about-- we've talked about resonances, and I've talked now about the two important parameters: the resonance frequency and the full width at half maximum. How do we measure those? There is actually sometimes a confusion. The more systematic approach is that you should measure all those frequencies and linewidths in angular frequency units, which are technically radians per second. But since the radian has no dimension, you sometimes say we measure it in inverse seconds. So this is how we measure angular frequencies. And this is different from the unit for frequencies. When we have a frequency, which is an angular frequency divided by 2 pi, frequencies are usually measured in hertz. The problem is that a hertz is also always 1 over a second. And this is where the confusion comes from. So let me just point out how you can avoid the confusion. You may write an angular frequency omega 0 as 2 pi times 1 megahertz. Then you know exactly what it is. Of course, this is nothing else than 6.28 times 10 to the 6 per second. But you should never say that omega 0 is 6.28 times 10 to the 6 hertz, because then people don't really know-- and you get confused, and you confuse other people-- whether you really mean a frequency of 6.28 times 10 to the 6 hertz or an angular frequency. So just be clean in your thinking and your homework and all that: when you mean an angular frequency, it's in 1 over seconds; when you mean it as a frequency, it's in hertz; and often the clearest form is to say, yes, I know where to put the 2 pi, and I put it in explicitly. So we often report frequencies like that in our papers. Finally, there is the question about gamma. What are the units for gamma? Well, if you look at the exponential which decays, it has e to the minus i omega t, and then it has the decaying part, e to the minus gamma t. So gamma is really a temporal decay, and there is no question about frequency versus angular frequency. It's not a frequency. It's not an angular frequency. It's a decay rate. So for instance, if gamma is 10 to the 4 per second, you should never say gamma is 10 to the 4 hertz. And you should also never say gamma is 2 pi times 1.6 kilohertz. That just doesn't make any sense. Gamma is really a damping rate. AUDIENCE: [INAUDIBLE]. PROFESSOR: Yes? I need one more minute. AUDIENCE: OK. PROFESSOR: And it is therefore an inverse time. The damping time associated with this gamma is simply the inverse of it, and in the case chosen it's a hundred microseconds. So just keep that in mind. Time is over. Any questions? OK, great. We meet again, same place, same time, on Monday.
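As a summary of the unit conventions just discussed, here is a minimal numerical sketch using the example values from the lecture (2 pi times 1 MHz for the angular frequency, and gamma equal to 10 to the 4 per second); the choice of Python is mine, for illustration only.

```python
import math

# Angular frequency: write it explicitly as 2 pi times a frequency in hertz.
omega_0 = 2 * math.pi * 1e6       # rad/s, i.e. 2 pi x 1 MHz ~ 6.28e6 s^-1

# gamma is a damping rate, an inverse time -- neither "hertz" nor "2 pi x kHz".
gamma = 1e4                        # s^-1
damping_time = 1 / gamma           # 1e-4 s = 100 microseconds

# Quality factor of the resonance (dimensionless).
Q = omega_0 / gamma

print(f"omega_0 = {omega_0:.3g} s^-1 (which is 2 pi x 1 MHz)")
print(f"damping time = {damping_time*1e6:.0f} microseconds")
print(f"Q = {Q:.0f}")
```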
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Are we ready? So good afternoon. Just a reminder, this week we see each other three times-- today, on Wednesday, and on Friday in the other lecture hall for our mid-term exam. Today we will finish the big chapter on light-atom interaction. But we're not getting rid of it, because we will be transitioning to an important aspect of light-atom interaction, and these are line shifts and line broadening. So today we start the next big chapter-- line shifts and line broadening. But before I do that, we have to finish light-atom interaction, and I want to come back to the rotating wave approximation revisited. So I'll revisit the revisit of the rotating wave approximation. Sometimes when I have discussions with students after class, I realize that something which I sort of casually mentioned is either confusing or interesting for you. And there are two aspects I want to come back to here, and I hope you find them interesting. One is, when we sorted out all those terms which need to obey angular momentum selection rules, I made sort of the innocent comment-- well, if you have plus omega and minus omega in a time-dependent Hamiltonian, one term is responsible for absorption, one is for emission. And when more than one person asked me about it, I think many more than one person in class would like to know more about it. So therefore, let me spend the first few minutes explaining why a time-dependent term in the Hamiltonian with plus or minus omega-- why one of them is responsible for absorption and one is responsible for emission. Well, we have Schrodinger's equation, which says that the change of the amplitude in state one has a term-- and if we started out with population in state two (let's say perturbation theory, we start in state two), then it is the only term in the differential equation which, through an off-diagonal matrix element, puts amplitude from state two into state one. So what I'm writing down here is just Schrodinger's equation, and the operator V is the drive field connecting state two to state one. So if I just integrate this equation for a short time, between time t and t plus delta t, and I'm asking, did we change the population of state one, which is now our final state-- well, then you integrate over that time interval delta t. But now comes the point that the initial state has, in its time-dependent wave function, a factor which is e to the minus i omega 2 t. The final state, which I called one, has-- because it enters as the complex conjugate-- e to the plus i omega 1 t. And let's just assume we have here a proportionality to e to the i omega t. And let me just say, omega can now be positive or negative; it will be part of the answer whether it should be positive or negative. Well, this integral here becomes an integral of e to the i (omega 1 minus omega 2 plus omega) t, integrated over time. And this is an oscillating function which, if you integrate it over time, will average to zero unless omega is equal, or at least close, to the frequency difference between the initial and the excited state.
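Since the blackboard expressions here are only described in words, a reconstruction of the argument in standard first-order perturbation theory notation (my own notation, not necessarily what was written on the slide) reads:

```latex
\Delta c_1 \;\propto\; V_{12}\int_{t}^{t+\Delta t}
   e^{\,i\omega_1 t'}\, e^{-i\omega_2 t'}\, e^{\,i\omega t'}\, dt'
 \;=\; V_{12}\int_{t}^{t+\Delta t}
   e^{\,i(\omega_1-\omega_2+\omega)\,t'}\, dt'
```

The integral averages to zero over the interval Delta t unless omega_1 minus omega_2 plus omega is approximately zero, to within the energy-time uncertainty of order 1 over Delta t-- which is exactly the statement made next.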
So actually, what you encounter here-- well, what I've derived for you here-- you can call energy conservation. I didn't assume it; it is built into the time evolution of the Schrodinger equation that you can only go from state one to state two, or state two to state one, if the drive term has a Fourier component omega which makes up for the difference. Or, using different language: if, through the drive term, you provide photons-- quanta of energy-- where omega fulfills the equation for energy conservation. And you also see from this result that when omega 1 is higher than omega 2, omega has to be negative; when the reverse is true, omega has to be positive. So that's why I said the e to the plus i omega t term is responsible for absorption, and the e to the minus i omega t term is responsible for stimulated emission. You also see-- but I stop here, because I think you've heard it often enough-- that if you integrate over a short time delta t, this equation has to be fulfilled only to within 1 over delta t. This is the energy-time uncertainty: for short times, the photon energy does not need to match the energy difference exactly. And you also realize that when omega is close to resonance, e to the i omega t does the absorption. But if you're in the ground state, the e to the minus i omega t term leads to a very rapid oscillation, close to a 2 omega oscillation. And we've discussed, in the context of the AC Stark shift, that this gives rise to the Bloch-Siegert shift. We've also discussed that this term is rapidly oscillating and is nothing else than the counter-rotating term which we usually neglect when we do the rotating wave approximation. So everything we've discussed in this context-- counter-rotating term, energy conservation, Heisenberg's time-energy uncertainty-- actually comes from this kind of formalism. Any questions? Of course, if you quantize the electromagnetic field, then you don't have a drive term with e to the i omega t. You just have a and a dagger for the photons. And the question of which term absorbs a photon or creates a photon does not exist, because you know it from whether it's a or a dagger. But you have the two choices: whether you want to use a fully quantized field with photon operators, or whether you want to use the time-dependent formalism, using a semi-classical or classical field in the Schrodinger equation. The second comment I wanted to make: using the semi-classical picture, I was going with you through some examples of when the rotating wave approximation is necessary and when not-- when you have counter-rotating terms. And yes, everything I told you is, I think, the best possible way I know to present it and explain it using a time-dependent classical electromagnetic field. But I realized after class that it may be useful to quickly restate what I said using the photon picture. If we have circular polarization, we have, for a given frequency, an annihilation and a creation operator. Let's assume that the mode we are considering is right-handed circularly polarized, so the creation operator creates a photon at frequency omega with this right-handed circular polarization.
So that means now, if we start from a level m and we have light-atom interaction, the operator which annihilates a photon with this circular polarization can, because of angular momentum conservation, only take us to a level where the magnetic quantum number is m plus 1. And the operator a dagger creates a photon through stimulated emission. So this is now our two-level system. And now we should ask the question: in terms of the rotating wave approximation, is it necessary or not-- are there counter-rotating terms? Well, the counter-rotating terms are the non-intuitive terms where you start out in the lowest state, but now, instead of absorbing a photon, you emit a photon. And the operator for emission is this one. So I can now ask, is there another term-- why don't we stick to the blue color for the photons-- a term which is driven by the operator a dagger for this circular polarization? Well, the answer is, there may be such a term, but the state we need now has a magnetic quantum number of m minus 1, because of the angular momentum selection rules. So this here is the counter-rotating term, which you may or may not neglect, depending on whether you want to make the rotating wave approximation or not. So therefore, if you got a little bit confused about the different cases I considered at the end of the last lecture, you may just summarize the many examples I gave you as a note which you should keep in the back of your head. Let me first phrase it in words and then write it down. If you have circular polarization and angular momentum selection rules, then the counter-rotating term may require a third level and is not part of two-level physics. So if you have a situation where the third level does not exist, you do not have a counter-rotating term. However, in all situations I've encountered in the lab, this third level does exist. OK. So let me just write that down: the counter-rotating term for circularly polarized radiation requires a third level, which may not exist-- and then you don't have this term-- but it does exist in most cases. Anyway, just an additional clarification of the topics we had on Wednesday. Any questions about that? Yes. STUDENT: Are there cases other than a spin-1/2 system where it doesn't exist? PROFESSOR: Well, I mentioned the example last class: if you do spectroscopy of an S-to-P transition in the magnetic field of a neutron star, then the Zeeman splitting is so huge that, well, you can assume that the third level has been shifted so far away that the counter-rotating term is completely suppressed. Other than that, well, we have the trivial situation which we discussed in NMR: if you simply have spin one-half, then the total number of levels is only two, because we're talking about spin up and spin down. Or I constructed, in the last class, the forbidden transition, a doublet S to doublet S state-- that's two pairs of S equals one-half-- and then we are missing the state that would couple any counter-rotating term into the system. OK. The next subject is saturation. In this chapter, I want to talk about saturation in general. I want to discuss monochromatic light but also broadband light. And I want to introduce the concepts of saturation intensity and absorption cross section-- things which I find extremely useful if I want to understand what happens when light interacts with a system. Just to whet your appetite, I will show you that the absorption cross section of a two-level system is independent of whether you have a strong or a weak transition.
Some people think the cross section should depend on it-- but there is a difference, which is important, between monochromatic and broadband light. In the end, the concepts are very simple. I should say, sometimes I feel it's almost too simple to present in class; on the other hand, if I don't present it, I can't make a few comments and guide you through. So my conclusion, at least for now, is that I show you some prepared slides, step you through them, make a few annotations, and point out certain things. We have already partially transitioned to teaching you this material through the homework assignments. This week's homework assignment, which is due on Wednesday, is almost completely on saturation. And I will make a few comments where what I present to you today is an extension of, or different from, what you're learning in the homework, and vice versa. Yes-- if I wanted to present saturation, power broadening, and all that in the purest form, I would just present you with the Optical Bloch equations. We can solve them, and then we have everything we want: a result which explains saturation and a result which explains power broadening. And you do some of it in your homework. However, what I want to show here is that saturation is actually a general feature of a two-level system if you have the three rates which I will explain to you in a moment-- very similar to Einstein's A and B coefficients-- so that all such systems show saturation. You may then immediately solve the Optical Bloch equations for monochromatic radiation. But for broadband radiation, we usually don't use the Optical Bloch equations, because for infinitely broad light there is no coherence, for which we would need the Optical Bloch equations. If you only have the Optical Bloch equations, you have solved for saturation in one limiting case, and you don't see that the concept of saturation is much broader. So let us assume that we have a two-level system, and we couple the two levels with a rate-- which you can think of as the rate of absorption and the rate of stimulated emission-- and I call this rate the unsaturated rate. In addition, there is some dissipation, some spontaneous decay, described by gamma. So R_u is the unsaturated rate for absorption and for stimulated emission. Of course, you know even before you solve those equations that there must be some saturation built in. If you look at the fraction of atoms in the excited state and you change the laser power, which means changing the unsaturated rate, things cannot shoot up forever, because you cannot put more than 100% of the population into the excited state. However, the fact that when we increase the laser power we drive absorption upwards and stimulated emission downwards means you won't even get 100%-- the maximum you can get is 50%. What I'm drawing for you here is this phenomenon of saturation, and now we want to understand the details. So using this rate equation, we define-- and this is now a definition-- the saturated rate as the net transfer from a to b. Because we have absorption and stimulated emission, the net transfer is the unsaturated rate times the population difference. And this is our saturated rate. But of course, we normalize everything per atom, so the saturated rate is written as a rate coefficient S times the total number of atoms, or the total population in both states. Eventually, we are interested in steady state. We can immediately solve the rate equation for steady state, which is done there.
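A minimal sketch of the rate equations being described-- my reconstruction, with lower level a, upper level b, unsaturated rate R_u, and decay gamma from b back to a-- reads:

```latex
\dot{N}_b = R_u\,(N_a - N_b) - \gamma N_b ,\qquad
\dot{N}_a = -R_u\,(N_a - N_b) + \gamma N_b .
```

Setting the time derivatives to zero gives N_b/N_a = R_u/(R_u + gamma), and the net transfer rate per atom, S = R_u (N_a - N_b)/(N_a + N_b), becomes S = (gamma/2) s/(1+s) with the saturation parameter s = 2 R_u/gamma, which reproduces the two limits discussed next.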
And we find that, for the steady state, we can eliminate one of the states from the equation, because we have the steady-state ratio. And then we find that the saturated rate is gamma over 2 times an expression which involves a saturation parameter. So in other words, it's just an almost trivial solution of very simple equations, which describes the saturation phenomenon I outlined for you at the beginning. This solution has the two limiting cases which we want to see: at a very low unsaturated rate, the saturated rate is the unsaturated rate, because there is no saturation. And secondly, if we would go to infinite power, the saturated rate becomes gamma over 2, because we have equalized the populations of ground and excited state-- one half of the atoms are in the excited state, and they dissipate, or scatter light, at the rate gamma. All right. Any questions? We now want to specialize to a situation which we often encounter, namely monochromatic radiation. For monochromatic radiation, the unsaturated rate follows-- well, I've factored out something here, but it follows the normalized line shape, which is a Lorentzian. And therefore, our unsaturated rate is proportional to the laser power. But I usually like to express the laser power through the Rabi frequency, or the Rabi frequency squared. So our unsaturated rate follows this Lorentzian, and on resonance this part is one; our rate is omega Rabi squared over gamma. And the definition for a saturation parameter of one, or for the saturation intensity, is that the unsaturated rate has to be gamma over 2. So, since omega Rabi squared over gamma is the unsaturated rate, it should be gamma over 2 for a saturation parameter of one. Therefore, our saturation parameter on resonance is given by this expression. And if you use the previous result and apply it to this unsaturated rate, we find a saturated rate which now shows the new phenomenon of power broadening. Let me illustrate it in two ways. The saturated rate involves the saturation parameter, and the unsaturated rate is a Lorentzian. But this Lorentzian now appears in the numerator and the denominator-- so it appears twice. With a one-step manipulation, you can transform it into a single Lorentzian. But this single Lorentzian is now power-broadened: it no longer has the width of the natural line width gamma; it has an additional term, and this is power broadening. STUDENT: [INAUDIBLE]? PROFESSOR: No, the resonance is still at the same position. The equations are trivial-- it's really just substituting one expression and simplifying it to a single Lorentzian. I just want to emphasize the result: if you drive a transition, the width of the Lorentzian is gamma over 2 if we have no saturation. But then, if we crank up the saturation parameter, the width increases with the square root of the power. That's an important result: the square root of the power leads to the broadening.
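To put numbers on the power-broadened Lorentzian, here is a small sketch; the parameterization is my own and uses the common convention s_0 = 2 Omega^2 / gamma^2 for the resonant saturation parameter, which is consistent with the on-resonance rate Omega^2/gamma equaling gamma/2 at s_0 = 1. The linewidth chosen is an illustrative value, not a specific atom.

```python
import numpy as np

gamma = 2 * np.pi * 6e6     # natural linewidth (rad/s), illustrative value

def s0(rabi):
    """Resonant saturation parameter s0 = 2 Omega^2 / gamma^2."""
    return 2 * rabi**2 / gamma**2

def fwhm_power_broadened(rabi):
    """Power-broadened full width at half maximum: gamma * sqrt(1 + s0)."""
    return gamma * np.sqrt(1 + s0(rabi))

for rabi in [0.1 * gamma, gamma, 3 * gamma]:
    print(f"Omega = {rabi/gamma:.1f} gamma  ->  FWHM = "
          f"{fwhm_power_broadened(rabi)/gamma:.2f} gamma")
```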
Now let me give you a pictorial description of what we have done here. If we start with a Lorentzian and we increase the power, you sort of want to drive the system with a stronger Lorentzian. But we know we have a ceiling, which is saturation. And of course, when you drive it stronger, you reach the ceiling on resonance earlier than you reach the ceiling when you are detuned away from resonance. So therefore, if you start with the red curve and crank up the power, you gain more in the wings, because you are not yet saturated there. And this graphical construction, which I have just indicated to you, leads to a curve which is broadened-- broader than the original Lorentzian. And this is the reason behind power broadening. I want to mention one thing here. For the classroom discussion, I have assumed that the light-atom interaction can be described by Fermi's golden rule, which we know is a limitation-- it applies when the system is, in effect, incoherent, or no longer coherent; we had a long discussion about Rabi oscillations versus Fermi's golden rule in the last two weeks. But what I'm doing is mathematically correct. The Optical Bloch equations, which you'll use in your homework assignment, will include the transition from Rabi oscillation towards Fermi's golden rule. And I'm just considering this limiting case. OK. I've talked about saturation of a transition. I've mentioned that we have defined the saturation parameter such that, when we have a saturation parameter of one, we get into the nonlinear regime where saturation happens. And of course, for an experimentalist, the next question is, at what intensity does that happen? This is summarized in those equations. It's as simple as possible algebra-- you just combine two equations; I don't want to do it here. And we have a result for the saturation intensity, which has two features I want to point out. One is, it scales with omega cubed. So the higher the frequency of your transition is, the harder it is to saturate. Of course, it has something to do with the fact that, for saturation, you need an unsaturated rate which is one half of the spontaneous emission rate. And you remember that the spontaneous emission rate was proportional to omega cubed. So that's why we have, again, the omega cubed factor. And in addition, the larger-- actually, it depends. Sorry, I made a mistake. Well, you can write the result in several ways. If you have an intensity and you go back to photons, you get factors of omega. You can write the result such that you have a gamma squared dependence, because one gamma comes from the matrix element squared, and one comes because you need to compete in your excitation with spontaneous emission. So anyway, this is the result, and you can calculate it for your favorite atom. For alkali atoms, we usually find that the saturation intensity is a few milliwatts per square centimeter.
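As a numerical illustration of the "few milliwatts per square centimeter" statement, here is a sketch using the standard two-level closed form I_sat = pi h c gamma / (3 lambda^3); this particular formula is my addition, consistent with the scalings quoted above, and the wavelength and linewidth are illustrative values roughly like an alkali D line.

```python
import math

h = 6.626e-34        # Planck constant (J s)
c = 3.0e8            # speed of light (m/s)

# Illustrative values, roughly like an alkali D line.
wavelength = 589e-9              # m
gamma = 2 * math.pi * 10e6       # natural linewidth (rad/s)

# Standard two-level saturation intensity, I_sat = pi h c gamma / (3 lambda^3).
I_sat = math.pi * h * c * gamma / (3 * wavelength**3)   # W/m^2

print(f"I_sat ~ {I_sat:.1f} W/m^2 = {I_sat/10:.1f} mW/cm^2")
```

For these numbers the result comes out around 6 mW/cm^2, i.e. "a few milliwatts per square centimeter", as stated above.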
Well, we can now repeat some of this exercise for the broadband case. In the broadband case, the unsaturated rate-- which is the rate for absorption and stimulated emission, following Einstein's treatment of the A and B coefficients-- is given by Einstein's B coefficient times the spectral intensity. And now we want the same situation as before: we want to reach saturation. And saturation happens when this is comparable with gamma. It's purely a definition that we say it should be gamma over 2, but then we are consistent with what we did before. And if you just take this equation and calculate what the saturation intensity is-- well, gamma is nothing else than the Einstein A coefficient, and here we have the Einstein B coefficient. And if we take the ratio of the Einstein A and B coefficients, the matrix element-- everything which is specific to the atom-- cancels out. And the saturation intensity-- or the spectral density; it's the spectral density now, for broadband light-- only depends on the speed of light and the transition frequency cubed. And it doesn't make a difference whether you have a two-level system with a strong matrix element or a weak matrix element. I could explain it to you at this point, but I want to hold off on the question of why there is a difference between single-mode monochromatic and broadband excitation until I have discussed one more concept, and this is the cross section. Just to check, are there any questions? Yes, Nancy. AUDIENCE: So in the broadband case, the line shape doesn't matter at all? Because in the monochromatic case, we had a line shape [INAUDIBLE]. PROFESSOR: Well, hold your question. The line shape matters. I will now discuss what the line shape of the atom is. And the quick answer is: if the atom has a line shape, we have to take the atomic line shape and do a convolution with the line shape of the radiation. And we have two situations: in one case, the monochromatic light is narrower than the line shape of the atom; in the other case, it's broader. And this difference, in the end, will be responsible for the effect that the linewidth of the atom, which is the natural linewidth, will cancel out in one case and not in the other. But that's actually the result of the next five minutes. Other questions? I know this topic can get confusing, because we go from one definition to the next. So let me just summarize. What I've said so far is: we drive an atom; we have absorption; we have stimulated emission. And we want to understand the phenomenon of saturation. And based on how we define saturation-- namely, that the unsaturated rate is gamma over 2-- we got some nice results for the saturation intensity and for power broadening of the Lorentzian. So it's pretty much having a definition and running with it. And now we want to express the same physics by using the concept of a cross section, for the following reason. You can do physics, you can do atomic physics, without ever thinking about a cross section. You can just say, I have a laser beam of a certain intensity, and I scatter light. But often, when we scatter something-- and you may be familiar with this from atomic collisions-- you want to write the scattering rate as a density times a cross section times a relative velocity. And this has an intuitive feeling to it. If you have a stream of particles in your accelerator, or a stream of photons in your laser beam, you can hold onto the picture that each atom in your target is a little disk. If a particle or photon hits the disk, something happens; if it misses the disk, nothing happens. And the area of the disk is the cross section. So in other words, we want to understand how big the disk of the atom is which will, so to speak, cast the shadow-- which is synonymous with absorption-- when we illuminate those atoms with laser light. For me, a very intuitive quantity. Anyway, all we do is take the rate of excitation we have already discussed, which is the unsaturated rate, and express it as the density of photons times the cross section-- and the relative velocity is the speed of light. And from this equation we find-- because everything is known; we have talked about it on the last few pages-- that the cross section is-- and this is the result-- 6 pi lambda bar squared. Lambda bar is the wavelength of light divided by 2 pi.
So we find that, for monochromatic radiation, the cross section of a two-level system is independent of the strength of the transition, independent of the matrix element. It just depends on the resonant wavelength. Now you would say, well, but what is then the difference between a strong and a weak transition? And this is shown here. If you take your monochromatic laser and you scan it, you scan across the cross section. When you are on resonance, you have 6 pi lambda bar squared. And the difference between a narrow transition with a small Einstein A coefficient and a strong transition with a large Einstein A coefficient simply means that in one case the curve is narrower and in the other case it is wider. We talked about the phenomenon of saturation. 6 pi lambda bar squared is the cross section in the perturbative limit, or the unsaturated cross section. Of course, if you increase the laser power, you saturate the transition, and the atom will have a smaller and smaller cross section. Actually, that's something important you should consider. When you have an atom and you increase the laser power, you scatter light, and the scattered light, or the absorbed light, saturates. But with the cross section, we want to know what fraction of the laser light is scattered. And the fraction of the laser light scattered goes to zero, because you make your laser light stronger and stronger while the total amount of laser light which is scattered saturates. So in other words, you have a saturation of the scattered light, you have a saturation of the net transfer of atoms through the excited state in the limit of infinite laser power; but since the cross section is, sort of, normalized by the laser power, the cross section has this dependence, 1 over 1 plus the saturation parameter, and goes to zero. And that means-- and this is the language we use-- that the transition bleaches out. If you saturate the transition, the cross section becomes smaller. So when you saturate the transition in an absorption imaging experiment, which many of you do, the shadow is less and less black, exactly because the cross section is bleaching out. But the amount of light you would observe in fluorescence is not getting less-- it saturates. These are just the two sides of the same coin. If anybody is confused, please ask a question. OK. So now, in this picture, we can immediately understand why we have differences between monochromatic radiation and broadband radiation. If we want to saturate a transition with monochromatic radiation, we have our narrow laser, we absorb with a cross section of 6 pi lambda bar squared, and we have to increase the intensity of the laser until the excitation rate equals gamma over 2-- that's our definition for saturation. So therefore, the required laser intensity is proportional to gamma, because the cross section is constant, but the product of cross section and laser intensity has to be equal to gamma over 2. However, now consider the case that you use broadband radiation. The spectrum is completely broad. Now, if an atom has a stronger transition, its cross section is wider, and the atom can absorb a wider part of the incident spectrum. So therefore, if the atom has a stronger transition, it automatically absorbs more of your spectral profile. And therefore, the result for the saturation, for the spectral saturation intensity, is independent of the matrix element and the strength of the transition.
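To put a number on the resonant cross section and the bleaching just described, here is a small sketch; the wavelength is illustrative, and the detuning dependence in the denominator is the standard Lorentzian form rather than something taken verbatim from the slide.

```python
import math

wavelength = 589e-9                       # m, illustrative
lambda_bar = wavelength / (2 * math.pi)

# Resonant, unsaturated cross section of a two-level atom.
sigma_0 = 6 * math.pi * lambda_bar**2     # equals 3 lambda^2 / (2 pi)

def sigma(s0, detuning_in_gamma=0.0):
    """Cross section including saturation; detuning given in units of gamma."""
    return sigma_0 / (1 + s0 + (2 * detuning_in_gamma)**2)

print(f"sigma_0 = {sigma_0:.2e} m^2")
print(f"sigma at s0 = 1  (on resonance): {sigma(1):.2e} m^2")
print(f"sigma at s0 = 10 (bleached):     {sigma(10):.2e} m^2")
```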
In general, if you're not in either of the extreme cases of monochromatic light or broadband light, what you have to do is take this cross section as a function of frequency and convolve it with the spectrum of the incident light. And this is exactly what is done here. You take your frequency-dependent cross section, you do the convolution with the spectrum of the incident light, and if you assume the incident light is spectrally very broad, you simply integrate over the Lorentzian line shape of the cross section. And then you find exactly the same result as we had two slides ago: that the spectral saturation intensity is independent of the strength of the transition. OK. Can you think of a very intuitive argument why, for spectrally broad radiation, all the properties of the atom cancel out? Think about one physical example-- let's say black-body radiation; this is spectrally broad. So you have an atom in a black-body cavity, and the atom experiences a very broad spectrum. For what number of photons-- black-body photons per mode-- would we find saturation? Think about it. It's a simple criterion you can formulate for black-body radiation to saturate your transition, in terms of the number of photons per mode. You crank up the temperature in your cavity. How high do you have to go with the temperature in order to saturate an atom which is inside your black-body cavity? AUDIENCE: One photon. PROFESSOR: Pretty close. AUDIENCE: 1 over the degeneracy. PROFESSOR: Degeneracies. OK, no degeneracies-- I hate degeneracies. That's your private homework, to put in the degeneracies afterwards. The answer I came up with was n equals 1/2, I think. I run the risk that I'm off by a factor of 2 now. But the argument was that-- AUDIENCE: The rate equals the degeneracy times n times gamma. So if the rate equals gamma over 2, that means that the degeneracy times n equals 1/2. And if the degeneracy equals 1, n equals 1/2. PROFESSOR: Yes. OK. So, spontaneous emission-- we know, from our derivation of spontaneous emission, that it corresponds to one photon per mode. And our criterion now is that we want to have an absorption rate, or stimulated rate, which is gamma over 2. So we get sort of half the effect of spontaneous emission when we have half a photon per mode. So therefore, spontaneous emission and absorption are proportional to n, and I think if n equals 1/2, then we have the unsaturated rate equal to gamma over 2. So this is a very physical argument: when we put an atom into a black-body cavity and we have half a photon per mode occupation number, then we saturate any atom we put in. Because, using Einstein's argument, the rate coefficient for stimulated emission and absorption is then just one half of the rate coefficient for spontaneous emission. And that explains why all atomic properties have to cancel out. So now, a question for you. We talked about the fact that if you have hyperfine transitions, it would take-- what was the value? 1,000 years for spontaneous emission? So we can completely neglect spontaneous emission. On the other hand, we've just learned that saturation only comes from spontaneous emission-- without spontaneous emission, we wouldn't have saturation. But now I'm telling you that any atom should really be saturated if we put it in a black-body cavity where n bar is 1/2. So what is the story now, if we put an atom into a black-body cavity and we are asking: will we saturate the hyperfine transition?
Will we eventually have-- saturation means we have [BLOWS AIR], 1/4 of the atoms in the excited state, 3/4 in the ground state. So the delta n has been reduced from 1, which it was initially, to 3/4 minus 1/4, which is 1/2. What will happen? I mean, this was almost like a thermodynamic argument. Will we equilibrate and saturate hyperfine transitions in a black body cavity based on this argument that for n bar equals 1/2, we should really saturate everything? AUDIENCE: Yes, but it's going to take a long time? PROFESSOR: Yeah. So for those conditions, if your black body cavity was n bar equals 1/2, you should saturate any two-level system completely independent what gamma is. And if the gamma is 10 nanoseconds or 10,000 years, you will saturate it. The value of gamma has completely dropped out of the argument. But of course, if you want to reach any kind of equilibrium, it will take a time scale, which is 1 over gamma. And then we are back to 1,000 years. Questions? All right. Then let's conclude this chapter and start our discussion about line shifts and line broadening. I have a problem with the tablet computer. I draw a line, but the computer draws a line somewhere else. So maybe I should just go back to this one and then copy things over. OK. Our next big chapter is line shifts and broadening. So the first question is motivational. Why should we be interested in line broadening? Well, the answer is almost trivial. No resonance is infinitely narrow. Whenever we want to interpret any result we obtain spectroscopically, we are not observing a delta function, we are not observing a resonance, we are observing a line shape. And unless we understand the line shape, we may not accurately find the resonance frequency. You could, of course, assume that your line shape is symmetric, which may be the case but is not always the case. So without understanding line broadening, you cannot interpret spectroscopic information. And eventually, as I mentioned in the first chapter of this course, the art of analyzing line shapes and finding the line center is very well advanced. When we have caesium fountain clocks, the accuracy how you operate the clock as a frequency standard is on the order of one microhertz. But those fountain clocks with you toss up the atoms for one second in the atomic fountain, they fall up and down, well, like a rock, which takes about a second for a rock to go up and down a meter. So therefore, the line width is on the order of one Hertz. So therefore, people have to understand any single aspect of the line shape at the level of 10 to the minus 5, or 10 to the minus 6 to have this kind of accuracy. OK. So I thought I want to start this unit by collecting form you examples for phenomena which cause broadening and shifting of lines. And well, my list has about 10 of them. Let's see how many you get. So what phenomena can lead to line shifts and line broadening? AUDIENCE: Phonons. PROFESSOR: Phonons? In terms of-- OK, AC stark effect. Pardon? AUDIENCE: Magnetic field noise. PROFESSOR: Magnetic field noise. OK. I tried to-- yes, very good. OK, yes. Let me just try to group it a little bit further, because I want to discuss it. So we have external fields. And external fields can have AC stark shifts. If an external field is noisy, we have noise fluctuations. All right. Anything else? AUDIENCE: Doppler shift. PROFESSOR: Doppler shift. Yes. So we have the velocity of the atoms. Doppler shift. AUDIENCE: Collisions. PROFESSOR: Collisions. Very good. Well, we just talked about one thing. 
AUDIENCE: [INAUDIBLE]. PROFESSOR: Exactly. When we have external fields, we can have external fields like magnetic fields or electric fields which cause shift and broadening. And if there's noise, additional shifts. But when we regard those fields as drive fields, they can do power broadening. Maybe by collisions, I should add the keyboard "pressure broadening." The higher the pressure in your gas cell is, the more collisions you have and the more you have broadening. Other suggestion? If you don't have any of those effects, do you measure delta function? What's the line width? Will? STUDENT: [INAUDIBLE]. STUDENT: Spontaneous emission. PROFESSOR: Spontaneous emission. Yes. And if you don't have spontaneous emission, do we then measure delta function? STUDENT: There's a Fourier limit. PROFESSOR: The Fourier limit. You can call it observation time, or time of light broadening. If an atom flies through your laser beam and you can interrogate it only for a finite time, you have a broadening due to the Fourier theorem. And this can be called time of flight broadening and time of-- or interaction time broadening. STUDENT: Rotations and vibrations? PROFESSOR: Rotations and vibrations. Not really. These are more-- then the system has more energy levels, and that's what you want to find out. So maybe I'm more asking, how are those energy levels-- how do they appear spectroscopically? Well, I think that's pretty complete. Two external fields. If you want, you can add gravity. There is a gravitational red shift, which is general relativity. But anyway, let me look over that and try to categorize it. What we had here actually all comes from a finite observation time. Either we do not have the atom long enough in our laser beam, and that sets a limit. Or if you are interested in an excited state and the excited state decays, then the atoms themselves have terminated our interrogation time. The second class here, velocity, I would summarize that we have motion of the atom. It's a form of motional broadening. We will actually discuss, when we discuss motion, also the possibility of having atoms in a harmonic oscillator potential, ions in an ion trap. So these are now trapped particles. This will actually often give rise to a splitting of the line into side bends. So we want to discuss that. I've already mentioned external fields, conditional [INAUDIBLE] interrogation, power broadening. Some power broadening will actually result into a splitting of line into [INAUDIBLE] triplet. So power will not only broaden the line, it can also split the line. And we want to discuss that. And finally, we have the effect of atomic interactions. So for interactions, I think we should add something like mean field shifts, which also goes sometimes by the name of clock shift. If you're not at zero density, your transition can be shifted by the presence of other atoms. Will? STUDENT: Isn't collisional broadening or pressure broadening sort of just an ensemble average of a stark effect? So that's sort of an external field? PROFESSOR: That depends now. Collisions is one of the richest phenomena on the list here. You're ahead of me. But in the next few minutes, I wanted to actually see, well, maybe we should-- those categories are not mutually exclusive, because one part of collisions is. An atom is in the excited state, it collides, it gets de-excited. So then collisions have no other effect than sort of give us a finite observation time, where there is an effective lifetime, which is just the time between two collisions. 
So it can be this. There's another aspect of collisions, that every time there is a collision, an atom feels the electric field of another atom. And then we have some form of collisional broadening, because we do some statistical averaging over stark effects, over level shifts. Now, there is a third aspect of collisions, which is maybe surprising to many of you. And this is actually-- I put it here under motion. It is collisional narrowing, or [? diche ?] narrowing. There's one limiting case when you have collisions, that collisions lead to a narrower line and not only to a broader line. the reason is a little bit-- if you put an atom in a buffer gas and it collides with all the buffer gas atom, it cannot fly away. So buffer gas and collisions can sort of help to increase your observation time. But only if the other effects of collisions are absent. So anyway, I thought this is a number of really interesting effects. And you already see from my presentation and discussion that it makes perfect sense to discuss them not one by one, as they appear in other chapters, but try to have comprehensive discussions of those. Let me talk about one classification of those shifts and broadening. And one is the distinction between homogeneous and inhomogeneous broadening. So the picture here is that if you have-- let me just give you the cartoon picture. If you have different atoms, atom 1, atom 2, atom 3. A homogeneous broadening situation is if the line has been broadened for each atom in the same way. An inhomogeneous broadening situation is that atom 1 has a line here, atom 2 has a line here, atom 3 has a line there. And if you look at the statistical ensemble, you may find the same line widths as on the left-hand side, but the situation and the mechanism is a very different one. So the different characteristics are that here, we have a mechanism which broadens or widens the line for each atom. Whereas here, there is maybe not even any line broadening for the atom. It's more a random shift to individual atoms. And the widening happens for the ensemble. Another very important distinction is in the left case, if you have one powerful laser, it can talk to all the atoms. Whereas in the right-hand side, you may have a laser with a certain frequency, and it may only excite one group of atoms in your ensemble. So this is the opposite here. In many situations do we have a physical picture where, in homogeneous broadening, we can understand it as random interruptions of the phase's evolution of the atom. For instance, through spontaneous emission, or you can see certain collisions-- just mean the phase of the excited state is suddenly perturbed. And therefore, the phase is randomized. So if the physical picture is a random interruption of the phase evolution, well, a random interruption of a phase evolution means that there is an exponential decay of coherence. And the line shape, the Fourier transform of an exponential decay is a Lorentzian. Whereas the physical picture behind inhomogeneous broadening is that you have random perturbations. And if you have many random or small perturbations, they often follow a normal distribution, which is a Gaussian. There's one other aspect of an inhomogeneous broadening. If it's an inhomogeneous broadening, it is as if the individual atom is not broadened, the individual atom is actually sharp, it has a longer coherence. And you can-- there are techniques to make that visible. And one famous technique, for those of you who have heard about it, are an echo technique. 
So having explained to you in a general way the difference between inhomogeneous and homogeneous broadening, how would you classify the line broadening mechanisms we have collected before? Which one are inhomogeneous broadening? STUDENT: Doppler broadening. PROFESSOR: Doppler broadening. We exploit that when we do saturation spectroscopy in the lab, when we just talk to one component of the velocity distribution. What else? STUDENT: [INAUDIBLE]. STUDENT: Collisions. PROFESSOR: Collisions. That's actually a good one. Usually, collisions are classified as homogeneous broadening, because the simplest model for collisions is collisions are sort of just hard-core collisions which just de-excite the atom, completely change the coherent phase evolution. And therefore, collisions would broaden the transition for all atoms to a line widths which is 1 over the collision rates. However-- and this shows that the distinction cannot always be made-- you can actually have collision rate which depends on the velocity. The faster atoms may have a smaller collision cross section than the slower atoms. And now you have an inhomogeneous aspect of the collision rate. And therefore, collision rate becomes inhomogeneous. I mean, the standard example for inhomogeneous fields would-- if you have an inhomogeneous magnetic field, you have stationary atoms-- well, not in an atomic gas, but maybe in [INAUDIBLE] or in a solid. And you have an inhomogeneous magnetic field. This is actually the standard case of nuclear magnetic resonance, that each atom possesses at its local magnetic field. And the line shape is inhomogeneously broadened. Colin? STUDENT: [INAUDIBLE] clock shift sometimes, in some circumstances. PROFESSOR: If the density is constant, you would actually say the mean field is the same for all atoms in the ensemble. But if you have a trapped atom sample where the density drops at the edge, you may actually have a sharper line, and less broadening, or less shift at the edge of the cloud. Anyway, so I think you have all the tools to classify it. And you see from the discussion that sometimes it's not so obvious. Or you may have a mechanism which has [? both ?] inhomogeneous that it does something to all atoms. So for instance, collisions broaden all the atoms, but then different atoms are more broadened than others. So there may be also an inhomogeneous aspect. But finally, let me ask you the following. It seems the first items on our list had sort of a very natural explanation in terms of the Fourier theorem, that, well, we only talk to the atoms for a finite time. Or the atoms decide not to talk to us for longer, because they spontaneously decay. Now, maybe you want to give me some arguments why some of the other mechanisms are actually also due to some form of finite time of interrogation. Well, if I would say, can we regard collisions as an effect of finite observation time? Well, if I rephrase "observation time" to "finite coherence time," that something interrupts the coherent evolution of the wave function, I think we would say the collision time sets a time limit to the coherence time and therefore, should also be regarded as due to the finite time, we can drive the atom in a coherent fashion. If I take power broadening, we just discussed power broadening. Well, what is the rate-- or 1 over the rate of power broadening? We just discussed that that's maybe nice to take it out of the context. 
We discussed before that power-broadened line widths is gamma over 2 times S plus 1 square root of it-- the saturation parameter. But when does power broadening happening? And what is the real time scale for what is the physical-- STUDENT: Spontaneous emission. PROFESSOR: Spontaneous-- so we had a criterion that the unsaturated rate has to be comparable to gamma. Let's forget about factors of 2 now. But that means that the Rabi frequency has to be comparable to gamma. The Rabi frequency tells us a time of Rabi flopping. So actually, power broadening can be understood as a finite observation time broadening, but the atom is leaving the excited state not by spontaneous emission, but by stimulated emission. So in other words, stimulated emission interrupts our ability to observe the atoms in the excited state. And so again, we see that there is a process coming in which interrupts our observation time of the unperturbed atomic levels. Well, let me go one step further. Let me ask you, do you have any idea how we could discuss the Doppler shift as due to some finite time scale? You would say, well, yeah, that's dimensional analysis. If you have a broadening, a broadening is a frequency, 1 over the frequency is the time, and there is a time scale associated with Doppler broadening. Sure. But now my question is, what is the physical time scale with Doppler broadening? Yes? STUDENT: [? Collisions. ?] PROFESSOR: No. We have an ideal gas without any collisions-- just a [INAUDIBLE] distribution. You're right. In practice, yes. But I try to create an idealized situation. So what is the time scale of Doppler broadening? You may have never heard the question. But this is for me, what I want to really teach you when I teach all these different line shifts and line broadening. There is a common denominator. STUDENT: You could think of the atom [INAUDIBLE] emission. And then you would have velocity [INAUDIBLE] emission. PROFESSOR: You're talking about recoil shifts, and the atom is changing its velocity due to recoil. This would something in addition, but it's not necessarily the case here. I give you a physical argument. If I make the atom heavier and heavier and heavier, the effect of the recoil vanish. But then I can heat up the heavier atom, that it moves with the same velocity as the slow atom. So there is an effect which you can associate just to the velocity and to the velocity distribution. And that's what I want to discuss now. But there is another effect with the recoil. But I can say the recoil is a finite mass effect, for that purpose. The mass is sort of my handle, whether the recoil of a singular photon is important or not. Yes? STUDENT: [INAUDIBLE]? PROFESSOR: Yes, but this is really a more trivial finite observation time. When you heat the wall of the chamber, it's a collision with the chamber. It means we have only a finite interaction size. Now, let me sort of guide you to that. The secret here is when we say, you have a finite lifetime, a finite observation time, what matters when we do spectroscopy is the time we can observe the atoms coherently. If the atoms de-phase, if the atoms get out of coherence-- for instance, if you have collisions-- if collision de-excite the atom-- we'll talk about it later-- it's like spontaneous emission. But then there are collisions which just create a phase hiccup, that the excited state gets a random phase. So an interruption of the phase, an interruption of the coherent evolution is, in effect, an interruption of us probing the atoms in a coherent way. 
And then the Fourier transform just tells us, this time, or 1 over this time, is the line which we observe. And you would say, but how does it come into play with atoms with a velocity distribution? In the following way. If you line up several atoms and they interact with a laser beam, some atoms are faster, some atoms are slower. If some of the atoms have moved compared to the slower atoms, one additional wavelength, then your ensemble of atoms is no longer interacting with the laser beam in a phase-coherent way. Because of the different velocities, they are now talking to random phases of the laser. So therefore, Doppler broadening is nothing else as a loss of the atoms to coherently interact with a laser, because some of them have moved an additional wavelength in the laser beam. Well, if that is true-- but what happens if the laser beam is like this, with the wavelengths, and the atoms go perpendicular? What happens then? STUDENT: There's no Doppler. PROFESSOR: Then there is no Doppler effect. So what I'm saying is fully consistent with every single thing you know about the Doppler effect. OK. So I think there's not much more we can do today. But let me give you the summary of this discussion. To the best of my knowledge, all line broadening mechanisms can be described by using the concept of coherence time. And it's a coherence time of a correlation function. It's pretty much the correlation function of the phase which the atom experiences. At t equals 0, it experiences one phase of your drive field. And a later time, how long does it stay coherent with the coherent evolution of the phase of your drive field of a correlation function? However, in the case of inhomogeneous broadening-- and this is what I just discussed with the different atoms starting together and having different velocities. In the case of inhomogeneous broadening, I have to include in the description of the correlation function ensemble averaging. So this is our agenda. On Wednesday, I will start to discuss with you very simple cases. I sort of like, before I introduce correlation function, we have the generalized discussion to summarize for you the phenomenological description of just Rabi resonance, Ramsey resonance, exponential decay, simple Doppler broadening, the recoil effect, that you have a clear physical picture of what the different phenomena are. And then we describe them with a common language, with a common formalism, which is a formalism of correlation functions. Any questions? One obvious question-- the chapter on line shifts and broadening will not be on the mid-term. OK. See you on Wednesday.
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
16_Atomlight_Interactions_V.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. Back to cavity QED, back to the fully quantized radiation field, back to vacuum Rabi oscillation. Let me just recapitulate and sort of make the transition from this intense discussion about homework to the intellectually stimulating discussion about atoms and photons. So in the semiclassical description of the electromagnetic field, photons can only be emitted because we have a Hamiltonian with the semiclassical electric field. So if you don't drive the system with an electric field, you cannot stimulate the emission of photons. But we know this is not what happens. Photons are emitted into empty space, photons are emitted into a vacuum. And for that we needed a quantized description of the electromagnetic field. We did field quantization, and we have now our quantized Hamiltonian. And on Monday I started to discuss what is sort of the paradigmatic situation, the paradigmatic example, for how you should think about the vacuum and how you should think about emission of photons into the vacuum. And these are the vacuum Rabi oscillation described by the Jaynes-Cummings model. So the situation which I have in mind, or which you should have in mind, is an idealized situation, but it has been realized experimentally. And some of those idealized experiments were recognized with the Nobel Prize research of Haroche and Dave Wineland. So the situation is we have an atom, but it only talks to one mode of the electromagnetic field, and we make sure that the atom only talks to one mode of the electromagnetic field not by eliminating other modes; they exist. I mean, an atom can emit upwards and downwards. But we surround it with a cavity which has such a small mode volume, it has such a small volume, that the single photon Rabi frequency is huge, and therefore the emission into this one single mode dominates over the emission into all other modes. So this is a condition that the single photon Rabi frequency has to be larger than gamma. And, of course, we also have to make sure that the system is idealized so the loss of photons because of losses in the mirror, or finite reflectivity in the mirror, also has to be smaller. So that means for several Rabi periods we have a system which has only two parts, a two-level atom and one single mode of the cavity. So that's the system we have in mind, and we discussed the Hamiltonian. We saw that the Hilbert space of the atom is excited in ground state, the Hilbert space of the photons is spent by the [? flux ?] states, but what happens is-- so there's an infinite number of states, because of the infinite number of states of the photon field-- but what happens is the Hamiltonian couples only an excited state with n photons to a ground state with n plus 1 photons. So the whole Hilbert space is segmented now into just pairs of states labeled by the index n. So after so much work, we are back to a two-level system. And here is our two-level Hamiltonian. And, well, a two-level system does oscillations between the two levels. Rabi oscillations, no surprise. And this is what I want to discuss now. But the new feature is that these are really, well, these are now really two levels. Each of them is the combined state of the atom and the quantized radiation field. 
So now we have included in our two-level description the quantum state of the electromagnetic field. So first you should realize that this Hamiltonian is absolutely identical to spn 1/2 in magnetic fields. And you can recognize by [? comparing ?] this Hamiltonian, this matrix, to the matrices we discussed for spn 1/2 in the magnetic field, that this corresponds to the situation where this spn 1/2 had a transverse field in the x direction which caused a precession from spn up to spn down. And this x component of the field corresponds now to the single photon Rabi frequency times n plus 1. That's the off diagonal matrix element in this matrix. The thing which we have to discuss, and I will focus later, is that it depends on n. So for each pairs of state labeled by n, the photon number, we have a different off diagonal matrix element. But let's discuss first the most important and simplest case. Let's assume we are on resonance, and we want to assume that we have a vacuum. Then our Hamiltonian is simply this. And when we prepare the system in an initial state, which is an excited state with no photon in the vacuum, then we'll have oscillations to the ground state with one photon. These oscillations are exactly the oscillations we saw on the spn 1/2 system. We can just map the solution. I'm not really writing anything here. So what we obtain is the famous vacuum Rabi oscillations. where the probability to be in the excited state oscillates with the single photon Rabi frequency omega 1. I think there's a little bit of an ambiguity in language. Is it the single photon Rabi frequency? or is it the vacuum Rabi frequency? Because there's always the question about plus minus one photon because we start in the excited state without photon so you want to say it's a vacuum Rabi frequency. But then you have the ground state with one photon, and this photon is reabsorbed and then you may want to call it the one photon Rabi frequency. So I leave it to you, but it's called vacuum Rabi oscillation and this Rabi frequency is usually referred to as the one photon Rabi frequency because we obtained the Rabi frequency by calculating the electric field of a single photon. So the Rabi oscillations which we are observing now correspond to the periodic spontaneous emission and re-absorption of the same photon. There's only one photon which is spontaneously emitted and reabsorbed in a completely reversible coherent way, and the time evolution is unitary. So it's a periodic spontaneous emission and re-absorption of the same photon. This has been experimentally observed. Actually, let me back up. Experiments are done in the microwave regime. The leading groups are, well, in the older days, Dan Kleppner, Herbert [? Weidner, ?] and Serge Haroche. And this involves Rydberg atoms. Rydberg atoms in superconducting high q cavities. And those Rydberg atoms, because things scale with n and n squared, the principal quantum number, have a fantastically strong coupling to the electromagnetic field. And there is a homework assignment on Rydberg atoms in such cavities. The other example is in the optical domain. And this really involves the D line of alkali atoms. You drive them on the D line. rubidium and caesium are often used, and the work is enabled by the development of so-called supermirrors which have an extremely high reflectivity, and you can realize an excellent q factor. And the leaders in this field are Jeff Kimble and Gerhard Rempe at the Max Planck Institute. 
So let me just discuss an example taken from the optical domain. So the generic situation is that you have two mirrors which define a single mode cavity. Usually, you have a stream of atoms. Traditionally in atomic beams, then in some experiments in slowed atomic beams, more recently in atoms which are falling out off the mode, and only very recently single atoms with the help of other laser beams that are trapped inside the cavity. So they are streamed in such a way that only one or a few atoms are in the mode volume interact with a single mode of the cavity at a given time. And then you want to figure out what is now happening, and you have the probe laser, you send it through the cavity, and then you record the transmission with a photodiode. Yanosh? AUDIENCE: What is the mirror made of for the [INAUDIBLE]? PROFESSOR: The mirror is made of a glass substrate, but then you would [INAUDIBLE] the coating. And the mastering is really to put coatings on which are very pure, but then also I think using ion sputtering, you make sure that the coating is extremely smooth and does not have any surface irregularities which would scatter a tiny fraction of the light. I know there are some people in Ike's group and [INAUDIBLE] group who work with high q mirrors. What is a typical example for the reflectivity? Or the q factor you can reach? AUDIENCE: 5 [INAUDIBLE] and a finesse of, maybe, 500,000. And they're called superpolished mirrors. PROFESSOR: So finesse of about a million, and that means the mirrors have 99.9999% reflectivity. And the superpolishing, I think, that was the last step. People had controlled the materials, but then they found ways to make a super polish and avoid these even one part per million scattering by surface roughness. OK. So if you do that experiment, what would you expect? Well, it's a [INAUDIBLE] experiment so if you would scan the probe laser, and there is nothing in it, what you would expect is you would just expect a transmission peak at the cavity resonance. And if you tune much further, you get the next peak at the free spectral range. Let me just indicate that. So this is a case for 0 atoms in the cavity. If you put 1 photon in the cavity, you no longer-- sorry, 1 atom in the cavity, you're no longer probing a cavity, you're really probing a system, which is no longer the cavity by itself. It's an atom-photon system. It's a couple system. And we know it's described by our two-by-two Hamiltonian, and this Hamiltonian has two solutions. And the two solutions are split by the one photon Rabi frequency. So the two eigenvalues of our Hamiltonian are at plus minus omega 1 photon. So therefore, for n equals 1, we have a situation that we have two peaks split by the single photon Rabi frequency. Of course, I have assumed that great care has been spent to make sure that the cavity resonance is right where the atomic resonance is. So this is now for 1. If you have 10 atoms, remember the two-by-two Hamiltonian looks the same, but it has the square root n plus 1 factor. So neglecting the 1 roughly when we have 10 atoms in the cavity, it's square root 10 larger Rabi frequency. And therefore, we would expect that we have now a splitting of the two modes, which is square root 10 plus 1 larger. Actually, I didn't-- sorry, I have to collect myself now. I showed you that the Rabi frequency scales with square root n plus 1 in the photon field. But you should realize that everything is [? isometric ?] between photons and atoms. It's the complete coupling between photons and atoms. 
And if you would now look-- but I don't want to do it now-- if you would now look what happens if several atoms are present in the mode volume, you would also get a scaling which is n plus 1 in the atom number, because the atom coupled coherently. It is actually an effect of super-radiance, which we'll discussed later. So just take my word. You have the same scaling with the atom number. But I have to give you my word now, because in the experiment this is what people varied. AUDIENCE: If they had varied power or number of photons instead, like, we couldn't have drawn these same diagrams, right? Because then the top part of the Lorentzian changes. If you're changing the photon number then Lorentzians change. PROFESSOR: Say again? AUDIENCE: So, like, right now, yes, we are varying the number of atoms so we can talk about the splitting. But if we were varying the power or the number of photons instead, then each of the Lorentzians, their height would change. How would you draw this observation if you were changing the photon numbers? PROFESSOR: You know, I don't want to go into line shape. I would probably be a Lorentzian. I mean, all I want to discuss here is that we have a two-by-two Hamiltonian, which is split. And if we have one atom and one photon, it is split by the single photon Rabi frequency. If we have one atom and 10 photons, the atom can of course absorb and emit only one. As I derived on the previous page, we would have now a Rabi splitting, which is square root n plus 1, n being the number of photons. But if you would start in an empty cavity with 10 atoms in the excited state, because all the atoms are identical, they would spontaneously emit together, and then you would have 10 atoms in the ground state, and then you would have 10 photons. And so maybe this helps you. If you start with 10 atoms in the excited state, they do everything together. If you have 10 atoms in the ground state with 10 photons, and now you have 10 photons and it's clear the 10 photons lead to a Rabi frequency, which is proportional to the square root of 10 or to the square root of 11. So therefore, what you will observe is you will now observe a splitting of the single mode of the cavity which goes by the square root of n plus 1. I don't want to discuss the line shape and the [? strings, ?] I just want to sort of discuss, in a way, the eigenvalues of the Hamiltonian, and the eigenvalues are the positions of the transmission peaks with a cavity. And that has been observed. I mentioned the two leaders of the field are Gerhard Rempe and Jeff Kimble. Well, Gerhard Rempe, he did his Ph.D. In the same group at the same time as I did so I know him very well. Then he went and did post doc work with Jeff Kimble, and now is the Director of the Max Planck Institute. He has the world leading group in cavity QED. But this is sort of here the two leaders have a joint paper, which is the first observation of the vacuum Rabi splitting in an optical cavity. Of course, you can easily observe it if you have a strong atomic bean with many atoms, because then you have a good signal. And secondly, the splitting is large and easily resolved. So what they managed to do is they managed to throttle down the atomic beam that fewer and fewer atoms at the given time were in the cavity. And eventually they came down to the limit of one atom. That was an historic experiment. 
Of course, it's not perfect in the sense that you do not see the deep cut between the two peaks simply because, when on average you have one atom in the cavity, sometimes you have to atom in the cavity, and then you have a peak in the middle. So those experiments in those days were done only with average atom numbers and not with trapped atoms where you know for sure there's exactly one atom in the cavity. OK. So I don't show you an experiment, but let me just state that this sort of single photon Rabi flopping has been observed. You start with the cavity in the vacuum field, and you sort of see this oscillation to the ground state with one photon. But what I want to discuss now is the situation that we are not starting with an empty cavity. We are starting with a coherent field. You can also start with a thermal field so there are different experiments you can do. What would we expect now? So now the initial photon state is not the vacuum state, but the thermal state. If you have a microwave cavity and you heat it up a little bit, you have to cool it down to below 1 Kelvin. People use either helium-free chrio stats or dilution refrigerators, but if you warm it up a little bit, you have a few microwave photons in the cavity. Or, that's even more controlled, you can make the cavity ice cold, but then you inject a few photons from your synthesizer into it-- from your microwave generator-- and then you have a weak coherent field. But a thermal state or a coherent state. So what we then have is, OK, we would expect now a Rabi oscillation; however, the frequency for the Rabi flopping is now proportional to n plus 1. And we have our photon field in the superposition of flux states. So the fact that we have a superposition state implies now that the Rabi oscillations have a different oscillation frequency for the different [? tablets ?] of states labeled by n. And that leads to a dephasing. So that would mean that if you would look at the probability to be in the excited state-- just think about it. You have a wave function where the atom starts in the excited state, and the photon field is in a superposition. So now you have a two-component wave function which has different parts, and each part has a specific Rabi frequency. So you would have oscillations. Let's say there is a certain probability that the cavity is in the vacuum, and then that means that there is a component which oscillates at the vacuum Rabi oscillation frequency. But if you have a component in your coherent or thermal state which has two photons in it, then you have Rabi oscillations which are faster. And now you have to superimpose them all. And if you all superimpose them, and you find that very soon there is a damping and maybe a little bit of vigor, but you see at damping of the population in the excited state. Q [? 2 ?] dephasing. I'm just hesitating. I think I took this plot out of my notes, but I would expect now the damping should actually lead to a probability to be in the excited state of 1/2. So let me just try to correct that. So there is a little bit of oscillation, but then there is sort of a damping. And eventually, if you have only a small number of photon states, then there will be a time where you have sort of at least a partial commensurability. You have maybe five frequencies. You know, square root 5, square root 4, square root 3, square root 2. But then there is sort of a time where all these different frequencies have done an integer number of oscillations each, and then you get what is called a revival. 
And if you go to a large photon number, you have square root 100, square root 99, square root 88, the revival will happen at a later and later time and eventually at infinite times if you use a microscopic field. But for small coherent states, or thermal states, which only involve a few photons, you will get a revival phenomenon. And this has indeed been observed. This was actually the PhD thesis of Gerhard Rempe, and it shows the probability in the excited state. They had previously observed the Rabi oscillations at early times, but now the experiment had to be adjusted, I think by using slower atoms, to observe the longer time. And here, well, 1987 for the first time revivals have been seen. Let me dwell on that, or first are there any questions about what happens now? Atoms in the cavity to Rabi oscillations? And if the photon field is a superposition of only a few states due to this pseudo commensurability, you find times where you have revivals. I just worked out something this morning which I think is nice, because it will highlight how you should think about spontaneous emission. So let me discuss. It doesn't really matter, but I want to give you a specific example that we have a coherent state. A lot of you know what a coherent photon state is. For those who don't, it doesn't really matter for what I want to explain, and recover that in [? 8.4.22. ?] But if you have a laser or if you have a microwave generator, what comes out is a field which has a normalized amplitude of alpha, but your field is in a superposition state or [? flux ?] states. With these prefactors, I just wanted to give you an example. What I really just mean is that we have a coherent superposition of number states. We have prepared that. So now we have one atom in the excited state, it enters the cavity which has been prepared with the short pulse for a laser or microwave synthesizer in these state alpha. And now we want to discuss-- so this is at t equals 0-- and now I want to discuss what happens as a function of time. Well, we know that if you have one tablet, n, we have Rabi oscillations between the atoms in the excited state, and we have n photons. Or it has emitted the photon, and then we have n plus 1 photon and the cavity. But now, we have a superposition state, and we have amplitudes an. So I mean, that's what we get. And this includes everything. It includes everything a two-level atom does in a single mode of a cavity. And this is spontaneous emission, stimulated emission, and reabsorption. But I want to use that now to discuss with you the misconceptions about spontaneous emission. Colin? AUDIENCE: We're talking about just spontaneous emissions into the cavity? PROFESSOR: OK. I've singled out a single mode. But what happens is-- and you're just two minutes, 30 seconds, ahead of me-- that we had discussed vacuum Rabi oscillations or Rabi oscillations when we have n photons in the cavity. This was our two-level system, our Hamiltonian, and all we get is Rabi oscillations with the Rabi frequency omega n. And now we have to sort of do averaging. I'm now discussing that we have a coherent superposition of number states. Let's say, a pulse of coherent radiation, a coherent state, and this is what we get. You can now, if you want, put in a [? zillion ?] of other modes, have another sum over all the other modes you want. 
So I'm just doing the first step in discussing with you what will happen, but adding more and more modes will actually not change the structure of the answer and will be, of course, quantitatively a mess but conceptually not more complicated. So I want you to really look at that and realize where is the spontaneity of spontaneous emission. Where do you see any form of randomness associated with spontaneous emission in this expression? I don't see it. This is a wave function, and this time evolution is unitary. Everything is deterministic, and depending now how we choose our coefficient, there is even a revival. It's not dissipative that a photon is spontaneously emitted, and it's done. We saw in the single photon Rabi oscillation it can be reabsorbed, we saw in a slightly more complicated situation that there are at least partial revivals, and it now depends how long we wait whether revivals will take place or whether they will be complete revivals or partial revivals. But we don't need a revival in a coherent evolution, the coherent evolution can just go to a complicated wave function and it's still a single coherent wave function fully deterministically obtained form the Hamilton operator. Sometimes it pops into our eyes through a reversible oscillation or through revival, but we don't need that. So let me write it down but then explain you something. So it's unitary. There is no spontaneity at all. However, eventually we want to retrieve the classical limit. So if we would go to this situation that the average photon number is much smaller than 1, then the fluctuation in the photon field around the mean number are very small. For the coherent state the fluctuations are square root n. And then, we retrieve the limit of semiclassical Rabi flopping with the Rabi frequency omega r, which is-- I'm not consistent here with lower and uppercase [INAUDIBLE]. So it's uppercase or lowercase omega n, and this is square root n times the single photon Rabi frequency. And, of course, for a large number of photons, we can always make the approximation that we do not have to distinguish between n and n plus 1. So this is the ultimate limit if we would work in the limit of large photon numbers. So the way how you should look at it is the following. This system undergoes a time evolution to a state which is rather complicated. But if you make the number n large, this becomes approximately a state where you have simply-- you know what the rate of the semiclassical limit? In the semiclassical limit, we have a constant laser beam with constant electric field amplitude, e, and then we have driven Rabi oscillations between ground and excited state. So therefore, I don't want to show you mathematically, but in the limit of large n, you can approximate this complicated entangled wave function by the product of Rabi oscillations between ground and excited state times a coherent photon field. And the correction between what I just said and this complicated wave function is like 1 over n, because it's sort of a 1 over n approximation where we have neglected terms which the relative importance of them is 1 over n. So therefore, there are people who will say and who will tell you when we have an interaction of an atom with a coherent state, and let's just think in the number of n being large, that n times out of n plus 1, we have a coherent state. The atom does Rabi oscillation and what it does is it just emits photon into the coherent field and takes it back, like in semiclassical physics. 
But in one case out of n cases, or the rate 1 over n of the rate of the wave function is sort of fuzzy. It's not a coherent state; it's something much more complicated. And if you do not keep track of this complicated nature of the wave function and just do some simple measurement by, let's say, just measuring the phase of the electromagnetic wave by projecting onto a coherent state, then you would find that with the probability of n the system was just staying in a coherent state. And with a probability which is one part out of n, something else has happened, and your detector cannot capture the entanglement of that state. And this last part is what some people associate with spontaneous emission. I don't know. That's my view where the spontaneity in this process is. It's not a spontaneity in the time evolution. It's more a spontaneity if you do not care to detect this complexity, but map it back to a coherent state. And then with a precision which is 1 over n, you retrieve the semiclassical limit, but the difference between the semiclassical limit and the entangled wave function, this is what some people say is spontaneous because it's not captured by a single picture. I'm actually expecting some people to disagree with me, but this is sort of my view, what I'm sort of learning from the simple examples I've given to you. Since Ike is an expert on it, maybe, Ike, can I ask you the question is there actually a simple way to show that if I go to a large n limit that you can sort of really show that n parts out of n plus 1 is really described fully by the semiclassical limit and there is only a 1 over n fraction where we have to look at the more complicated wave function? AUDIENCE: I don't think there's a simple way to do it, but one can look at the equivalence of a [? and a factor ?] state, and they're only different by one photon number. PROFESSOR: It's sort of clear. I mean, everything is if you approximate n by n plus 1. If you don't care about the small difference, everything falls into place and is simple. But I was just wondering if one could show sort of in a more direct or more intuitive or maybe more quantitatively what is really the extra part beyond stimulated emission absorption into the coherent state. So what sort of really the nature of what people call the spontaneously emitted photon? AUDIENCE: I think that I don't the question, because I still argue that it's purely [? unitary ?] evolution even-- PROFESSOR: OK. AUDIENCE: For that system, and therefore, it's purely [INAUDIBLE] and nothing spontaneous is happening at all. PROFESSOR: OK. All right. Good. OK. Fine. What is next? I think this finishes our discussion on vacuum Rabi oscillations and revivals. I have now two topics in light atom interaction which you may not find in many textbooks, but it's my experience that they're really relevant. One is very conceptual. It's about the rotating wave approximation. And the other one is just the opposite, very technical. It's not really a new concept, but this is about saturation intensities and cross-section of an atom for absorption. The last things, cross-section for absorption and saturation intensity, that's what you need when you talk to atoms in the laboratory. These are the quantities in which we think intuitively about light atom interaction. So it's not involving any concept. I want to spend 20 minutes in introducing for you saturation, saturation parameter, cross-section, what's different between monochromatic light and broadband light. 
But before I do that, I have a few minutes on the rotating wave approximation. So let's call it rotating wave approximation revisited. Again, rotating wave approximation. And what I want to discuss can be discussed in the fully quantized picture, but also in the semiclassical picture. In the fully quantized picture, just a reminder, what we discussed earlier was that when we have the atomic raising and lowering operator and the photonic raising and blowing operator, we got four terms. And two of the terms are co-rotating, two are counter-rotating. But I can get exactly the same number of four terms in this semiclassical picture, and I want you to see both. But in the quantized picture, it's actually easier, because when you see a and [? a dega, ?] you know immediately one is absorption one is emission. So therefore let me explain to you what I want to tell you about the rotating wave approximation using the semiclassical picture, because then you immediately know how to apply it to the quantized picture. So what I want to bring in the here in addition to what we have discussed about light atom interaction, we had sort of a dipole Hamiltonian, is the fact that we have circular polarized light, left-handed and right-handed light, and I want to sort of use that and combine it with angular momentum selection roles, which as you remember we discussed after our discussion of dipole, quadrupole, and magnetic dipole positions. So I put now all those parts together and revisit the rotating wave approximation. So what I hope for is it tells you a little bit how selection roles, angular momentum, circular polarization, and semiclassical field which rotate in one direction how they are all connected. So I have to set up the situation by saying that we use as a quantization axis the direction k, which is either the direction of the propagation of the light beam or, in general, it's orthogonal to the polarization of the electric and magnetic field. And I can talk about an electric field driving an electric dipole transition. I can talk about a magnetic field driving a magnetic transition. It doesn't really matter. I will use [? Bsc ?] amplitude, but you can also immediately think electric dipole, and this field is linearly polarized. But I want to immediately decompose this field into right-handed and left-handed field. Or a linearly polarized field can be regarded as a superposition of a field which circulates this way plus one which circulates the other way. And ultimately, the message we will see is that if you have linearly polarized light, we always get counter-rotating term, we always have a [INAUDIBLE] shift and such. But if you use rotating fields or circularly polarized light, selection roles may actually lead to the result that there is no counter-rotating term at all. So this is eventually what I'm aiming for, and this will be the final point of the discussion. So the field which rotates in the right-handed direction where the rotating field is a superposition of x and y, or i and j. And one the rotates has a cosine omega t and one has sine omega t. I don't need to write down the left-handed part, because there's just a minus sign. Or this will actually become very handy. I do all the discussion for the right-handed part, but I can always obtain the expression for the left-handed part by replacing omega by minus omega. Which will mean that some emission process by the right-handed part will be an absorption process by the left-hand part. Be we'll see. 
You can change angular momentum by plus 1 by absorbing a right-handed photon or-- you'll see. We'll get there. So anyway, those signs will become important. Let me now take the above expression for the right-handed part and replace cosine omega t and sine omega t by e to the i omega t. So this was the i component, this was the j component. We divide by 2. Just to avoid confusion, I want to emphasize I've started with a real field. So I'm not using, as you often do in e and m complex field and the real fields are the real part, I have not started out by adding, you know, imaginary parts to the field. I've started out with a linearly polarized field in the x direction cosine omega t, and I've decomposed it into two real fields. One is right-handed, one is left-handed. Complex numbers only come because I want to use a complex exponential to replace cosine omega t and sine omega t. We are almost done with the decomposition of the field. I just wanted to-- we have now four terms, and I want to [? recoup ?] them. i minus imaginary unit j. i plus imaginary unit j. This is e to the i omega t. And this is e to the minus i omega t. So what have we done? Well, we've just started with linearly repolarized light, and I've rewritten the expression twice, and now we are looking only at one of the circular components, and in the end what we have is four terms. Well, that's also what we had in the fully quantized Hamiltonian, and we now want to identify what those four terms. Two will be co-rotating, two will be counter-rotating, but it's very helpful to analyze those terms. But there are two things we have to look at now. One is we have an e to the i omega t. And, well, it's probably a sign convention, but trust me, if you put that in the Schrodinger equation, it mean that you increase the energy of the atom if you drive it with e to the i omega t. You take it from a ground state to an excited state, which differs in frequency by omega, and therefore this means you have increased the energy of the system, and this corresponds to absorption. Whereas this one here means we take an atom from an excited state to a ground state, and this is the situation of stimulated emission. Remember, in selection roles we take our field and we multiply it with a dipole moment, electric, magnetic dipole moment whatever. But now we want to use also the spherical tensor decomposition of those dipole moments. It's a complicated word, but what it means is those terms are dotted with the dipole moment, and if you do it now component-by-component, we retrieve selection roles because this peaks out the-- let me just write it down-- the x plus y tensor component of the matrix element. And this corresponds to delta m equals plus 1. We change the angular momentum by one unit, and of course this term is then delta m equals minus 1. So we have done the work. What I want to do now is just map those terms into and energy level diagram. I like sort of pictorial representations, and each term becomes now a graphical [INAUDIBLE]. So let us assume we have a system, hydrogen is to p state. But let's say generally we go from a j equals 0 to j equals 1 state, which has three components. Now, I have set it up in such a way that-- oops, we need a little bit extra space. I've set it up in such a way that the states here, this is m equals 0, this is m equals plus 1, and this is m equals minus 1. So therefore-- let me just use color coding now-- this one here is delta m equals plus 1 so this one always moves to the right. 
It changes angular momentum by 1 so it can always move to the right, whereas the other one, delta m equals minus 1, moves to the bank. Absorption is e to the i omega t, always moves up. And stimulated emission moves down. So with that what happens is this term here transfers one unit of angular momentum and energy. So that would mean this term goes up here. It could go up here, if [? there were a ?] state. The other term-- let me use a green color-- is driving the process in the opposite direction. But now we have to also consider that you can go down here, and you can go down to a virtual state. A virtual state is just something which has the same wave function as a state, it just has an e to the i omega t, which is not-- it's a driven system. You drive it. You [INAUDIBLE] a state. You [INAUDIBLE] a state at the drive frequency, and it just means, in this case, this state has an oscillation e to the i omega t, which is very, very different from what a state which is populated would have, and this is what we call a virtual state. So in other words, what is possible is we have our three states, plus minus 1 and 0, but this is the spatial wave function including angular momentum. But we can now drive it by plus omega and minus omega, and therefore we can have it as virtual states pretty much at any energy we want. But this process here is not possible, because this would require to go to a state which has m equals 2, which does not exist. So now what I've shown here is if we would stock in the m equals 0 state, I've shown you the four terms, two are co-rotating and two are counter-rotating. If you neglect this virtual state which has a detuning of about 2 omega, or 2 resonance frequency of the atom, this is the rotating wave approximation. One term is responsible for absorption; the other term is responsible for a stimulated emission. But if I don't make the rotating wave approximation, I have those two extra terms. So this is only the right-handed light, and I want to sort of play a little bit with this concept. If I would take the left-handed light, I would add sort of four more arrows. Two more here and two more here. But let's just keep the situation as simple as possible. But I really sort of like that you write down right-handed, left-handed side, decompose it into its components, and each component is now in this diagram connected to an arrow where one direction is angular momentum, the other one is energy. So let me now talk about other energy diagrams. And this will lead to the answer. Well, can we create a situation where we have only two terms, which would be the simplest two-level system, can be directly realized without any rotating wave approximation a two-level system? So if we had two levels, which have only m equals 0 and m equals 1. So this would be the situation I just discussed with those two levels. So the only way how I can fit in this arrow is this one, and the diagonally downward arrow is that. So in this case, rotating wave approximation is not an approximation, it is exact. But some purists will actually say, hey, you can never realize that when you have an n equals plus 1 state. Then you always have an m equals minus 1 state. And then you have a virtual state down there, and then you get two more terms, which are the counter-rotating terms which I just showed above. 
So, well, I would say if you have a neutron star which makes an infinitely high magnetic field, you can have a huge Zeeman splitting between m equals plus 1 and m equals minus 1 and completely move one of the angular momentum states out of the picture. But, of course, in the rotating wave approximation we are neglecting off-resonant terms at 2 omega, omega being an electronic excitation energy, so I'm really talking here about Zeeman shifts comparable to electronic energies to eliminate the other state. So in principle, I can say this is my Hilbert space, and in this Hilbert space no rotating wave approximation is needed. But it's maybe an artificial Hilbert space. When I had a discussion with other people, we came up with the possibility of some forbidden transition. If you go from a doublet S to a doublet S state, all you have is a spin system which has angular momentum 1/2, plus 1/2 and minus 1/2. And then you realize that the only way you can fit in the orange arrow is in this way, and the green arrow in this way. So here you would have a situation where the rotating wave approximation is exact. But, of course, it's not an electric dipole transition; it's some sort of weaker transition, which may be forbidden. I need two more minutes. I have discussed the case where we have quantized along a direction, I called it the k direction, and the polarization of the electromagnetic field was along i and j, perpendicular to it. So let me now discuss the case where we quantize along the polarization of the electromagnetic field, and you remember from our discussion of selection rules that this is pi light. So in this case, our magnetic or electric field is polarized along the i direction, and the real cosine omega t gets decomposed into e to the plus and e to the minus i omega t. And we know already one term is absorption, one is emission. And now, if I take my j equals 0 to j equals 1 system, pi light has a selection rule of delta m equals 0. So now I have an arrow, which I want to be orange, which goes up. And a green arrow-- this is a great program; the only thing is you have to be very careful when you change colors and press carefully. That's why sometimes the colors are not doing what I want. But here is green. But now, of course, with linearly polarized light we can always go down to a virtual state. We have now four terms. Two are co-rotating, two are counter-rotating. So the quick conclusion of the last ten minutes is that there is the possibility that counter-rotating terms can be zero for sigma plus or sigma minus light, due to angular momentum selection rules. But what we have also learned is, if you have the m equals plus 1 state, there is also an m equals minus 1 state, so if you have circularly polarized light and we drive a transition between two m states, the counter-rotating term does not come from the same pair of states involving m equals plus 1; it involves m equals minus 1. So it's the other state-- which is maybe degenerate, or only slightly split by a magnetic field-- which is responsible for the counter-rotating terms. Anyway, we have talked so much about the rotating wave approximation and those terms, I just wanted to show you how it is modified if you use degenerate p states and angular momentum. Any questions? OK.
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
13_Atomlight_Interactions_II.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. So what we are talking about is actually matrix elements. If you want to do anything interesting in atomic physics, you have to copy or induce transitions from one state to another state. Well, maybe that should be Hab. For many phenomenon, which we will cover throughout the rest of this semester, spontaneous emission, coherences, and three-level instances, and super radiance, all we need is a matrix element. And this matrix element will just run through all the equations, and be responsible for a lot of interesting phenomenon. And for most of the description of those phenomena, we don't have to know where this matrix element comes from. The only thing we have to know, there is a non-seeable matrix element which drives the process. And as you know, the matrix element, with an external field, is called the Rabi frequency. And a lot of physics just depends on the Rabi frequency. But what is behind? The engine behind the Rabi frequency is a matrix element. So in the unit I started to teach the week before spring break, we talked about matrix elements. And for H, for the Hamiltonian, we used the coupling of an atom to the electromagnetic field. And then we calculated, what is the matrix element induced by the electric field? We made the dipole approximation. And that's your plain, vanilla, generic dipole operator which can connect two states. But we also consider, what happens when we go beyond the dipole approximation, and we found extra ways of copying two states? For instance, we can copy two states which have the same parity with a quadrupole transition, or we can couple them with a magnetic dipole transition. So these are other ways to get into matrix elements. For most of the course, you don't have to understand what is behind the matrix, and you just know, there is a number which drives the process. So what I want to finish today is to discuss-- and these are called selection rules-- which tell us, when are those numbers, when is this matrix element which couples two states, when is it zero, or when is it non-vanishing. And what is helpful here is, well, as always in physics, use symmetry. And if you have an operator, let me give you examples immediately. But just think for a moment about the electric dipole. The electric dipole is the position operator R. And you want to know, can the position operator R induce a transition between two states. The way to analyse it is now in terms of symmetry. And for symmetry, which is always fulfilled for isolated atoms, is angular momentum. Angular momentum is a conserved quantum number, we have rotation symmetry. So therefore, we want to now understand matrix elements in the language of rotation symmetry. And therefore, we don't want to use a precision operator x, y, or z. x, y, z do not have the rotation symmetry. We want to use linear superpositions of x, y, and z-- I'll give you an example in a moment-- in such a way that the operator becomes an element of aesthetic of a spherical tensor. And spherical tensor, I gave you the definition in the last lecture, the element of spherical tensor, Ln, is defined by-- well, I connect it with something you know, that it transforms on the rotation, like the spherical harmonics, ylm. 
So it is pretty much for an operator, what the ylm, what the spherical harmonics are for wave functions. I think I can do it more formal. And Professor Schwann knows much more about it. I think these are elements of the rotational symmetry group, But I don't want to go there. So what I mean by that is the following, that if you take the position vector r, you can expand it into a basis, which is x and y. But if you use the spherical basis, x plus/minus iy. Then what appears are the spherical harmonics. So in that case, it's rather simple. The position vector has actually in this representation components which you can even see are the spherical harmonics. And therefore, we transform like the spherical harmonics. Or just to give you another example, if you have the operator, which is responsible for the quadrupole transition, well, you get the gist, it's a product of two coordinates. So therefore, it's a spherical tensor of rank two. And it so happens, but I'm not deriving that, it is a superposition of two components with Lm quantum number, 2 plus 1, 2 minus 1. So that's how we should think about it. So, we want to ask, we want to extend the operator, into operators which have rotational symmetry, and these are those, or these are those three. So instead of using the vector Cartesian coordinate, we use its spherical components. And with that, we can take this expression from the last lecture, and rewrite it using the Wigner-Eckart theorem into a way which allows us to immediately formulate selection rules. So [INAUDIBLE] and primer are the quantum numbers of the state, except for angular momentum, so matricing about principle quantum number of the hydrogen atom. And we want to copy from a total angular momentum J prime to total angular momentum J. Actually, we want to copy from J prime, M prime to a state, JM. And what the Wigner-Eckart theorem tells us that we can factor out the M dependence. The M dependence just comes from orientation in space. So M is just how you orient wave functions and vectors and space. And you can sort of write this matrix element as a projection. And this is nothing else than the familiar Clebsch-Gordan coefficient. And the Clebsch-Gordan coefficient for coupling the initial state, JM, or to-- let me put it this way-- to start with the initial state J prime, M prime, we have the L and the M of our operator. And that should result in a total angular momentum of J and M. So we retrieve again the formalism of the addition of two angular momenta. Sometimes, you have two particles you couple into angular momentum and ask, what is the total angular momentum of the composite particles? But what we do here for this selection rule, we have the initial state, we calculate with the angular momentum of the operator. You can think the operator is a field which can transfer angular momentum. And then, of course, the final state has to fulfill angular momentum conservation. But one source of the momentum is now the operator, is the external field, is the photon, or the microwave drive, whatever you apply. And yes, this Wigner-Eckart theorem allows us to write the matrix element as a reduced matrix element. Which really decides whether the transition is non-vanishing or not, times a factor which is just the orientation of the wave function and of the operator in space. So for the Clebsch-Gordan coefficient, we have a simple selection rule. And this is that for the [INAUDIBLE] number, the M of the final state has to be the M of the initial state, plus a little M of the operator. 
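For reference, here is a compact LaTeX statement of what was just said in words; normalization and phase conventions for the spherical components vary between textbooks, so read the proportionalities rather than the exact prefactors.

```latex
r_{\pm1} \propto \mp\,\frac{x \pm i y}{\sqrt{2}} \propto r\,Y_{1,\pm1},
\qquad
r_{0} \propto z \propto r\,Y_{1,0},
\qquad
\langle \alpha\, J M \,|\, T^{(L)}_{m} \,|\, \alpha' J' M' \rangle
  = \langle J' M';\, L\, m \,|\, J M \rangle\;
    \langle \alpha\, J \,\|\, T^{(L)} \,\|\, \alpha' J' \rangle .
```

The Clebsch-Gordan factor immediately gives M = M' + m, and the triangle condition |J - J'| <= L <= J + J' that the lecture turns to next.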
And for both the Clebsch-Gordan and the reduced matrix element, we have the triangle rule. Well, if you couple two angular momentum vectors to a final angular momentum, the three vectors have to form a triangle. And the triangle rule says that some-- let me write it down, and then you recognize it-- that the angular momentum construct by the field has to fulfill the triangle rule that J prime and J can be connected. Yes? AUDIENCE: What is this symbolic meaning of the double bars? PROFESSOR: It's just how, in many textbooks, the reduced matrix element is written. It's nothing else in a matrix element. But, you know, plus the y that I looked at in the quantum mechanics book. But what happens is these are not states. J and J prime are not states. They have an independence. So we've taken out the independence. So this is sort of a matrix element between a state which may have been stripped of its independence. So maybe, I don't know if that's 100% correct, but if you have the YLM in certain states, you have an e to the IM and M part. And this has probably been factored out. So these are not really states, and the double line just means it's a reduced matrix element with the meaning I just mentioned. It's a standard way of factorizing matrix elements. And yeah, that means reduced matrix elements. So in other words, when we talk about selection rules, we want to use the representation of spherical tensors, because the spherical tensor, the rank of the spherical tensor, just tells us how much angular momentum is involved in the photon, is involved in this transition. So maybe just to give you a question, so if I were to do a multipole expansion, and I have an octupole transition, what is now the angular momentum transferred by a photon? STUDENT: 3? PROFESSOR: What? STUDENT: 3? STUDENT: 3? PROFESSOR: 3, yeah. The dipole is L equals 1. Quadrupole is spherical tense of rank 2. L equals 2, so it's L equals 3. Now, can a photon transfer three units of angular momentum? Can an atom get rid of three units of, let's say, orbital or spin angular momentum. We start in a state which is J prime equals 3. You need one photon, and you go to a state which is J prime equals 0. Is that possible or not? We don't have [INAUDIBLE], but do you want to volunteer an answer? What's the angular momentum of the photon? STUDENT: [INAUDIBLE]? PROFESSOR: Well, be careful. The photon has an intrinsic angular momentum, which is like the spin of the photon. That's plus/minus 1. But just imagine that you have an atom, and the photon is not immediate at the origin. The photon is emitted a little bit further out. Then, with reference to the origin, the photon has orbital angular momentum. And that's what we're talking about. In the multipole expansion, we fall in powers of x, and z, and y of the spatial coordinate of the electron. And that actually means we're going away from the origin. And if you emit something which is away from the origin, you have orbital angular momentum. So, yes. An octupole transition is exactly what I said. It means a photon is emitted, and it changes the angular momentum of the atom left behind by three units. That's what we really mean by that. And that's what we mean by those electrodes. The question that you should maybe discuss after class is, what happens if you detect this photon? Is that now a supercharged photon, which has three units of angular momentum? Is there something strange in its polarization? Think about it, and if you don't find the answer, we can discuss it in the next class. OK. 
So this is the classification. Let's just focus on the simple examples. We have discussed electric dipole and magnetic dipole radiation. These are induced by vectors. Remember, E1 is the dipole vector. For M1, the matrix element was by the angular momentum vector. So These are vectors. And that means the representation of the spherical tensors, or the quantum numbers of the spherical tensors, are the same of the Y1n. And so for dipole radiation, whether it's electric or magnetic, we have now with the dipole selection rules, which pretty much save you at one unit of angular momentum to state B. Can you reach state A with that? And these selection rules are that you can change the angular momentum between initial state by 0 and 1. This is the triangle rule. And delta m can be 0 and plus/minus 1, depending on polarization, which we want to discuss in a moment. So in angular momentum, electric and magnetic dipoles have the same selection rule, where when it comes to the question of parity, we've already discussed that. That an electric dipole connects to a state of opposite parity, whereas, the magnetic dipole connects two states of the same parity. And of course, this comes about because L is an axial vector, and R is a polar vector, which have different symmetry when you invert the coordinate system. The one higher multipole port transition, which we discussed, was the electric quadrupole, E2. And the spherical tensor operators for the quadrupole transition, I gave you already the example of, let's say, xz, products of two coordinates, because we went one order higher than the dipole. They transform as Y2m. And therefore, we have selection rules for quadrupole transitions, which tell us now that we can change the total angular momentum up to 2. And also, delta m can change up to two units. And again, just to emphasize, because people get confused all the time. When we talk about a quadrupole transition, we mean absolutely positively a transition where one photon is emitted. If you fully quantize the field, there is one creation operator of the photon. It's one photon which is created, and this photon carries away the angular momentum we've just specified. Questions about that? Let me conclude our discussion of matrix elements by talking about something which is experimentally very relevant. And this is how selection rules depend on the polarization of light. And I only want to discuss it for electric dipole transitions. So when we wrote down the coupling of the atom to electromagnetic radiation, we had the dipole operator, but we also had, of course, the mode of the electromagnetic field, which was characterized by a polarization epsilon. So until now, when I talked about selection rules, we discussed this part. But now we want to see how it effects polarization. Well, the epsilon, for instance, for circular polarization-- we'll talk about linear polarization in a moment-- has this representation. So this is the unit vector of the polarization of the electric field when it's circularly polarization. And now remember, we take this vector r, and expand it in the following way. So if you multiply now the operator r, or the matrix elements created by this vectorate operator, by the polarization, you see that one circular polarization checks out this component. The other circular polarization projects this out. And later, we'll talk about that linear polarization projects that out. 
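As a quick summary of the projection argument being set up here (the phase conventions are the simplest ones; the pi case is the one discussed just below):

```latex
\hat{\epsilon}_{\pm} = \frac{\hat{x} \pm i\hat{y}}{\sqrt{2}}
\;\Rightarrow\;
\hat{\epsilon}_{\pm}\!\cdot\mathbf{r} \propto \frac{x \pm i y}{\sqrt{2}} \propto r\,Y_{1,\pm1}
\;\Rightarrow\; \Delta m = \pm 1 \quad (\sigma^{\pm}\ \text{light}),
\qquad
\hat{\epsilon}_{\pi} = \hat{z}
\;\Rightarrow\;
\hat{\epsilon}_{\pi}\!\cdot\mathbf{r} = z \propto r\,Y_{1,0}
\;\Rightarrow\; \Delta m = 0 \quad (\pi\ \text{light}).
```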
So when we said that we have matrix elements for dipole transition, which can change angular momentum, or the incremental number, by minus 1, plus 1, and 0, this is now related to the polarization of the light, either the photon which is emitted, or when we use circularly polarized light, we can only drive this transition, that or that, because the scale of product of the polarization vector and the matrix element project out only one component of the spherical tensor. So if you look at the expansion above, we realize that the left- and right-handed circular light projects now out the spherical tensor operator, T1 plus minus 1. And since it's circularly polarized light, and therefore, we find this election rule that delta m, the Z component, the Z component of the angular momentum, changes by plus/minus 1 when the circular polarized light is sigma plus or sigma minus, right-handed or left-handed. OK. So this is responsible for circular polarization. These are selection rules for circularly polarized light. Let me conclude by discussing the case of linear polarization. Well, when we ask linear polarization, if we ask for linear polarization along x or y, well, it's linear polarization, but we should regard it as the linear superposition of sigma plus and sigma minus. So in other words, if you have the quantization axis along z, and you use light which is polarized along x or y, the way how the light talks to the atom with symmetric operators is that the light is a superposition of sigma plus and sigma minus. So what we have so far is, so we had here the light key, the propagation of the light was along the z-axis. But now, we want to look at the other possibility that z, or the quantization axis, is parallel to the polarization of the electric field, which would mean that the quantization axis is usually defined by an external magnetic field. If you're talking about the situation that the electric field of the electromagnetic wave is parallel to the magnetic field, then with this polarization, we peak out this spherical tensor component, which is z, which is r times Y1,0. And that means that this polarization of the light induces a transition for which delta m equals 0. And this is referred to as pi light. So maybe if that got confusing for you, let me just help out with a drawing. We have our atom here, which is quantized by a magnetic field B. And if you shine light on it, we have the electric field perpendicular to the magnetic field. So this would be x and y. And the natural way to describe it is by using x plus/minus IY. And we have selection rules where delta m is plus/minus 1. But alternatively, we can also shine light along this direction. And for the electric field, which was perpendicular to B, we retrieve the previous case. We have superpositions of the sigma plus and sigma minus. But the new case now is that the electric field is parallel to B. And then, we drive transitions, which have delta M equals 0. So these are sigma plus and sigma minus transitions. And this here is what is called a pi transition. Anyway, it's a little bit formal, but I just wanted to present it in this context. Questions? STUDENT: I have one slightly, maybe, basic question. When we talk about polarization in all these matrix elements-- so for example, photon [INAUDIBLE], right-- these are single photons [INAUDIBLE] elements. And so when we talk about shining a laser, it has a polarization. But we don't talk about polarization for single photons. Or do we? 
PROFESSOR: Actually, we talk about-- the question is, what is the polarization? Do we talk about polarization of single photons, or polarization of laser beams? Well, let me back up and say, we talk about polarization of a mode of the electromagnetic field. We will always expand the electromagnetic field into modes. And the mode is the polarization. It may happen that at some point, a photon is emitting a superposition of modes. But in the most straightforward description, we always do a mode analysis. And often, we simplify the case by saying that the atom interacts only with one mode of the electromagnetic field. And maybe in the case of spontaneous emission, we then sum over all modes. But for each mode, there's a specific polarization. And it doesn't matter if this mode is filled with one atom, or with a laser beam, with a classical electromagnetic field, which corresponds to zillions of photons. STUDENT: [INAUDIBLE] does it always end up being electrical polarization in this case, then? Like because if it's many photons, then there's a lot of [INAUDIBLE] for each of them, or each of them individually-- I don't know. PROFESSOR: No, it depends. If you have an atom, and it has one unit of angular momentum, and it spontaneously emits a photon, if the photon is emitted along the quantization axis, it can only be sigma plus. If it's emitted in the other direction, it has to be sigma minus. Now if you go at strange angles, then at this angle, you overlay it with different modes. And you may now find photons in a superposition of polarizations, because we have several modes which are connected with this direction of emission. I think if you write it down, it's pretty clear. It's just sort of projection operators. And for spontaneous emission, we sum over all modes. But for me, I always think about-- we can always think about what a single photon does by saying, well, if I'm getting confused about a single photon, let me figure out what many, many, many identical photons would be. And that would mean, instead of a single photon in a certain mode, I release a beam in this mode. And then, suddenly, I can think, classically, I know what the electric field is such. And then you go back to the, what is the electric field of a single photon, and usually make the connection. So I think at least for the discussion of matrix elements, transitions, angular momentum, I don't think you ever have to distinguish between what single photons do and what laser beams do. But there are important aspects of single photons, non-classical aspects, which we'll discuss in a short while. Other questions? OK. That's all I want to say about selection rules. So with that now, we can simply take the matrix element and run with it. So in this lecture and on [INAUDIBLE], I want to talk about basic aspects of atom-light interaction. And what I want to talk today about it is the two important cases when an atom interacts with monochromatic wave, or when it interacts with a broad spectrum. In one case, when I say monochromatic case, you may just think of the best laser money can buy. Very, very sharp. Very, very monochromatic. When I talk about a broad spectrum, you may just think about black-body radiation, which is an ultra broad spectrum. And they're two very different cases. And some of it is just related to Nancy's question, that if you have a broad spectrum, we're always talking about many, many modes, and they will be incoherent, and they will be irreversible physics. 
Whereas for monochromatic light, everything is a pure, plain wave, and everything is coherent. So we want to sort of talk about that first. And then later this week, I think on Wednesday, we will talk about spontaneous emission. But right now, we focus on the simpler case, where we drive the system with electromagnetic radiation, which is either narrow-band or broadband. But let's just start with a cartoon. We have an atom. And for that discussion, all we need is two levels. And all we need is that the two levels are connected by some matrix element. And the basic phenomenological situation is that we have one atom, which sits in a vacuum. So we have volume, V, of vacuum. And what is important now is that the walls of the imaginary boundary of what defines our vacuum is at low temperature. And low temperature means that the atom will irreversibly decay into the ground state with a lifetime tau. And that means that in some picture, the excited state is the broadening, which is broadened by the natural lifetime. And in our discussion, we assume-- and this is what I said with the cold walls of the vacuum-- that the energy difference is much, much larger than the relevant temperature. And this is very well fulfilled for our standard atomic system. The typical excitation energy, even for atoms with loosely bound electrons, as the alkalis, is two electron volt, which corresponds to a temperature of 20,000 Kelvin. And even at the rather hot temperature, definitely hot temperatures, in The Center for Ultracold Atoms, but the KT at room temperature corresponds to 25 milli-electron volt. So therefore, when we have an atom in isolation, this is what we find. We find an atom which will irreversibly decay to the ground state. And the fact that it irreversibly decays to the ground state is really an inequality between energies. If you will talk about a hyperfine transition or something, there may be a possibility that we have an excited state, which is thermally excited. But in the following discussion, when we drive the atom, and when we look at spontaneous decay, we always assume that the thermal energies are so small, that we really assume an atom sitting in a cold vacuum. Actually, it's your next homework assignment, where you will consider, what are the effects of black-body radiation. And you will actually find out in your homework that they are non-negligible. So yes, there are corrections. But you will also find out that the corrections are rather small, or it takes a long time before black-body radiation induces any observable transition. OK. So I'll just try to be a little bit formal here. Give you sort of a sketch of an atom in a cold vacuum. Ground state is stable. Excited state, irreversibly decays. And now, we want to bring life into this situation. Now we add light. And the light-- and this is now our discussion-- has a [INAUDIBLE]. And we want to distinguish the cases of narrow-band and broadband radiation. So it's clear that if the bandwidth of the light, the only scale-- well, we have the scale of omega. But that's a huge scale. The only smaller scale, which is given by the atom, is the natural linewidth. And depending, in which case we are, we talk about narrow-band excitation and broadband radiation. And once the linewidth is much narrower than gamma, we don't get any new physics when we assume perfectly monochromatic light. So once we are much smaller, we're really discussing the case of, well, we can neglect the spectrum broadening of the light source. 
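To back up the energy-scale comparison quoted a moment ago, here is a small Python check; 2 eV is the lecture's example excitation energy, and 300 K is an assumed room temperature.

```python
# Compare a ~2 eV optical excitation energy with thermal energies,
# to justify treating the surrounding vacuum as "cold" for optical transitions.
import math
from scipy.constants import e, k

E_opt = 2.0 * e      # 2 eV excitation energy, in joules
T_room = 300.0       # assumed room temperature, in kelvin

print(f"2 eV  <->  E/k_B = {E_opt / k:,.0f} K")               # roughly 23,000 K
print(f"k_B T at 300 K   = {k * T_room / e * 1e3:.1f} meV")   # roughly 26 meV
print(f"thermal excitation factor exp(-E/k_B T) ~ {math.exp(-E_opt / (k * T_room)):.1e}")
```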
Or in the other case, when we have broadband light, we can pretty much make the assumption that the light is infinitely broad, and what matters is only the spectral density of the light. So in a pictorial representation, if this is the frequency omega, we have the atom with the natural linewidth gamma. Narrow-band means we are much sharper than that. And broadband means really wide distribution. So if we have broadband light, it doesn't really matter what the total power is. If the light is very broad, there can be infinite power in the wings, but the atoms don't care. What matters when we have broadband radiation is the quantity called the spectral density. And that's what we need in the following. Which is, let me just give you the units. Which is energy per volume and frequency interval. So we can talk about the spectral density as of omega. Or alternatively, when we have a propagating beam, we don't want to talk about energy, we want to talk about intensity. So it is intensity per unit frequency interval. Which would mean I of omega is the energy density, multiplied with the speed of light. And that becomes energy per area and time. So that's the flow of energy. But because we are talking about board light, it has to be normalized by the frequency interval. In contrast, monochromatic radiation, it's sort of one monochromatic electric field. And we will specify it by the single frequency, omega, and the electric field amplitude. Which when multiplied by a matrix element becomes the Rabi frequency. Or we can characterize the light by the intensity I. But then it's an intensity which has the units of energy per area time. It's not normalized to any frequency interval, because we have assumed that the frequency interval is 0. So if you now have a description how these two forms of light interact with the atom, at this point, and we come to that later this week, we have to make an assumption that we are looking at times which are much smaller than the time for spontaneous emission. So if you now, in a perturbative sense, expose the atom's monochromatic or broadband radiation, unless we have included in the description the many, many modes for spontaneous emission, we are limiting ourselves to a very short time. This is, you would say, a severe description, because atoms emit photons after a short time. But we already capture, without considering spontaneous emission, a lot of different physics. And we can nicely distinguish between features of monochromatic and features of broadband excitation. OK. So let's start out with the case of-- give me a second. OK. So if you look at the two cases, in the monochromatic case, we will discuss the idealized situation of an atom interacting only with a single mode. And what we will find out is, we will find out that now, in the optical domain, we will find actually equations for the two-level system which are identical to what we discussed earlier when we discussed spin [INAUDIBLE] in a magnetic field. So in that sense, a two level system, driven by a laser system, will behave identically to a spin driven by a magnetic field. Shouldn't come as a surprise, but I will show that to you. But I can go over that very quickly. The board-band case will actually follow from the single mode case, because what we assume is broadband means many, many modes. And then we do an averaging over many single modes by assuming random freeze. But I also want to show it to you because I picked my verbs carefully. 
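In symbols, the two ways of specifying broadband light mentioned here are the following, with the units spelled out, since the units are what distinguish them from the monochromatic quantities:

```latex
w(\omega)\;:\;\Bigl[\tfrac{\text{energy}}{\text{volume}\,\cdot\,\text{frequency interval}}\Bigr],
\qquad
I(\omega) = c\,w(\omega)\;:\;\Bigl[\tfrac{\text{energy}}{\text{area}\,\cdot\,\text{time}\,\cdot\,\text{frequency interval}}\Bigr].
```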
You have many, many more things, but we assume that there is a random phase. When we talked about one photon emitted into a angle-- it maybe responds to a question earlier-- this photon may be in a coherent superposition. This is not many modes in a broadband wave. Many modes in broadband wave means that there is no correlation whatsoever between the modes, and all we will be able to talk about is an IMS value of an electric field. But anyway, the result is sort of predictable, and I wanted to tell you what I'm aiming for. But it's now really worthwhile to go through those exercises and look at what happens in perturbation theory for short times when we have monochromatic radiation, and when we have broadband radiation. So the first discussion will show Rabi flopping. I don't know how many times we have looked at Rabi oscillation. But these are now Rabi oscillations between two electronic states covered by a laser beam. And I want to show you how this comes about. And when I said strong driving, well, we have only a limited time window before spontaneous emission happens. We have to discuss the physics we want to discuss in this shot time window. And if you want to excite an atom, and see Rabi oscillation in a short time, you better have a strong laser beam. So this is why the monochromatic excitation that we discussed will pretty much automatically be in this strong coupling limit. OK. So what do we have? We have a ground,and we have an excited state. We have a matrix element. We know now where it comes from. And we have a monochromatic time dependence. In perturbation theory, we build up time-dependent [INAUDIBLE] amplitude in the excited state, because we couple the ground state with the off-diagonal matrix element to the excited state. And we have to integrate from the initial time to the final time. We have the time dependence of the electromagnetic field, and we also need the time dependence of the excited state. So when I integrate now over t prime, I take out the ground state amplitude, because we're doing perturbation theory, and we assume that for short times, needing order, the ground state amplitude is one, as prepared initially. So this in integral can be solved analytically. Some of you may remember that the minus 1 has something to do with the lower bound of the integral. And when we discuss the easy polarizability, we said, this is a transient, and we neglected it for good reasons. But now, we're really interested in the time evolution of the system, so now we have to keep it. OK. We are interested in the probability in the excited state. So we take the above expression and square it. And we find the well-known result, with sine squared, divided by omega minus omega eg. OK. So this is pretty much just straightforward, writing down an analytic expression. But now, let's discuss it. For very short times, and this is an important limiting case, the probability in the excited state is proportionate to times squared. And this is important. We're not getting a rate which is proportional to time. We're obtaining something which is time square. And the proportionality to t square means it's a fully coherent process. So whenever somebody asks you, you switch on a strong coupling from a ground to the excited state, what is the probability in the excited state? It starts out quadratically. The linear dependence-- famous golden rule, [INAUDIBLE] or such-- only come later. This is a very universal feature. 
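Writing out the result that is only described verbally here, with detuning delta = omega - omega_eg and Rabi frequency Omega_R, keeping only the near-resonant term and dropping overall phases:

```latex
b_e(t) \simeq \frac{\Omega_R}{2}\,\frac{e^{i\delta t}-1}{\delta}
\quad\Longrightarrow\quad
P_e(t) = |b_e(t)|^{2} = \frac{\Omega_R^{2}}{\delta^{2}}\,\sin^{2}\!\frac{\delta t}{2}
\;\xrightarrow[\; t \ll 1/|\delta| \;]{}\;\Bigl(\frac{\Omega_R\,t}{2}\Bigr)^{2}.
```

The t-squared short-time behavior is the coherent growth emphasized here; the sin-squared factor is the perturbative Rabi oscillation discussed next.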
And even if you use broadened light, for a time window, delta T, which is shorter than the inverse bandwidth of the light, talking about Fourier's theory, you don't have time to even figure out that your light is broad and not monochromatic. For very short times, the Fourier limit does not allow you to distinguish whether the light is broad or monochromatic. So what I just derived for you, an initial quadratic dependence, is the universal behavior of a quantum system at very short times. Because it simply says the amplitude in the excited state goes linearly in time, and the probability, quadratic. OK. So this is for a very short times. But if you look at it now for longer times, we have actually-- we'll see the atomic behavior, and these are Rabi oscillations. But there is one caveat. So we have derived. However, we have derived them only perturbatively by assuming that the ground state has always a population close to 100%, which means we have assumed that the probability in the excited state is much smaller than 1. Otherwise, we wouldn't keep the ground state. And this is only fulfilled if you inspect the solution. The solution is only self-consistent if you have an off-resonant case, where the Rabi oscillation only comes from a small fraction of the ground state population of the excited state. Of course, you all know that Rabi oscillations, this formula, is also varied on-resonance. And you can have full Rabi flopping. But I want to make a case here, distinguish carefully between monochromatic radiation and broadband radiation. For that, I need for perturbation theory. And therefore, I'm telling you what perturbation theory gives us at short times, and in terms of Rabi oscillations. STUDENT: So you're saying we assume strong coupling with respect to the atomic linewidth, but weak coupling with respect to the resonance, for instance, in [INAUDIBLE]? PROFESSOR: It's simple, but subtle. Yes. So what we have is, we assume we switch on a monochromatic laser. Since we do not include spontaneous emission, which will actually damp out Rabi oscillation-- we'll talk about that later-- we are only limited, we are limited here to short times, which are shorter than the spontaneous decay. And now, I gave you one universal thing. At very, very short times, it's always quadratic. It's a coherent process. So that's one simple, limiting, exact case you should keep in your mind. But now the question is, if you let the time go longer, something will happen. And there are several options. One is, if times go longer, spontaneous emission happens. OK. We are invalid. The other possibility is, when time gets longer, and we are on-resonance, we deplete the ground state, or [INAUDIBLE] perturbation theory doesn't deal with that. But if we are off-resonance, we can allow time to go over many Rabi periods and observe perturbative Rabi oscillations. So this is how we have formulated it. We do perturbation theory of the system without spontaneous emission. And eventually, we violate our assumptions, either because spontaneous emission kicks in, or because we deplete the ground state when we drive it too hard, or if we go too close to resonance. But the later assumption, of course, that we can't drive it hard, as you know, is artificial. We can actually discuss the monochromatic case. Not just in perturbation theory, but we can do it exactly. STUDENT: I want to go back again to-- PROFESSOR: And this is what I want to do now. But first, we can go back. 
STUDENT: So when we are talking about non-B resonance and B-resonance, so if we decrease the detuning, then we are getting close to resonance. So again, this gets invalid. But if we increase the detuning, we could exceed the spontaneous emission rate. So then, we won't see any Rabi oscillations again, because, at those time periods, this oscillation would [INAUDIBLE] detuning. So to observe Rabi oscillations, we have to be at times more than the detuning, or more than [INAUDIBLE] detuning. PROFESSOR: Oh, yeah. Of course. STUDENT: So the detuning has to be less than [INAUDIBLE], but more [INAUDIBLE] that we are still [INAUDIBLE] resonant. PROFESSOR: No. The detuning has to be larger than the natural linewidth, because then the Rabi oscillations are fast, and we have Rabi oscillations which are faster than any damping due to spontaneous decay. That's an image we are talking about. So in the limit of our detuning, you can detune very, very far, and you never reach the limit of our perturbative abode. STUDENT: Yes. OK. PROFESSOR: Anyway, I want to do perturbation theory of the broadband case. And the broadband case will be an incoherent sum over the single mode case. So this is why I had to bore you with, what do we get out of perturbation theory for the monochromatic case? Of course, you know already that in a two-level system, we can do it exactly. And I just want to outline it, mainly to introduce some notation. So our Hamiltonian here, which couples the ground in the excited state is given by the dipole matrix element, the electric field vector, and we call this the Rabi frequency. And then we have a sinusoidal or co-sinusoidal frequency dependence. And all I want to do is to show you that a two-level system driven by an electromagnetic field is identical to spin 1/2, which we discussed earlier, and then we are done. There is one technical or little trick we have to do, which is trivial, but I want to mention it. So if you want to compare directly with spin 1/2, we are now shifting the ground state to half the excitation frequency. In other words, just to make the key analogy with the spin, usually we say for an electronic transition, we start at 0, and we go up. But now we shift things that the zero of energy is in the middle between the ground and the excited state. And then, it looks like the excited state, we spin up, the ground state, we spin down. So with that, our Hamiltonian is now excited, excited, minus-- so all I've done is I've shifted the origin. And the coupling, using our definition of the Rabi frequency is couples ground and excited state. And excited ground state. These are the two off-diagonal matrix element. And the time dependence is cosine omega t. So we are now very close to exploit the correspondence with spin 1/2. Because after shifting the ground state energy, this is the z component of the spin operator, the [INAUDIBLE] matrix. And this here is the x component. So therefore, for driving an electronic transition with a laser beam, we have actually spin Hamiltonian, which has the standard form. So let me just write it down, because it's an important result. The Hamiltonian for driving and dipole transition with a linearly polarized laser beam corresponds, or is identical, to the Hamiltonian for spin 1/2, in a static magnetic field along the Z direction, which causes a splitting between spin up and spin down. And the splitting is now omega eg plus a linearly polarized oscillating field along the x direction. 
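In formulas, the Hamiltonian just described reads as follows, using the lecture's convention for the Rabi frequency; some texts insert a factor of 1/2 in the drive term, so the normalization is convention-dependent.

```latex
H = \frac{\hbar\,\omega_{eg}}{2}\,\sigma_z + \hbar\,\omega_R \cos\omega t\;\sigma_x ,
```

which is formally a spin 1/2 in a static field along z plus a linearly polarized oscillating field along x; the split of the oscillating part into co- and counter-rotating pieces is what the next passage carries out.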
And you probably remember that when we discussed the spin problem, what we liked actually most was that we had a rotating magnetic field, because it made everything simpler. And we are doing that now by formally writing the [INAUDIBLE] polarized field as a superposition of left-handed and right-handed, or counter-rotating and co-rotating magnetic field. So let me just do that. So we have the Z part. And now, instead of having just sigma x, cosine omega t, I add sigma y, sine omega t, and I subtract sigma y, sine omega t. So now we have shown that there is something in addition to the spin problem. We discover when we had a rotating magnetic field, that we have two components here which rotate. And these are the co and counter rotating magnetic fields in the spin problem. And the counter rotating, you remember in the spin problem, we solved the problem exactly by going into a frame which rotated at the Larmor frequency, which becomes now omega eg. And the co-rotating term became stationary on resonance in this rotating frame, whereas the counter-rotating term rotates at a very high frequency in this frame at the Larmor frequency. So if this frequency, if you fulfill the inequalities that the co-rotating term is close to resonance, or in other words, we are close to resonance, and we are not using an infinite intensity of the laser beam, that we broaden everything in co and counter rotating terms of boson resonance. So if you fulfill those two conditions, then we can neglect the last term. And this is the rotating wave approximation. So in other words, in the spin problem, we can always assume we haven't circularly polarized the rotating magnetic field, and we have an exact solution. I say a little bit more about it later. But in many situations, when you excite an atom with a laser beam, you get both terms. And usually, you proceed by neglecting one term, and by making the rotating wave approximation. will, in one or two lectures, discuss whether there are situations where the counter-rotating term is exactly 0 due to angular momentum selection rules, but that's a separate discussion. In many situations, it cannot be avoided, and it's always there. It's actually always there to the point that when I talk to some colleagues and say, I can create a situation, an atom, where the counter-rotating term is exactly 0, some colleagues reacted with disbelief, and then eventually felt that the situation I created for angular momentum conservation was somewhat artificial. But we'll get there. It's an interesting discussion. But anyway, just remember that for magnetic drive, if you use a rotating magnetic field, you don't need a rotating wave approximation. Everything rotates at one frequency. But usually, when you drive a two-level system with lasers, we usually have an extra term which needs to be neglected. OK. But if you do the rotating wave vapor approximation, we have now exactly the situation we discussed for spin 1/2 in a rotating magnetic field. And then, the same equation has the same results. And then, our results for spin 1/2 are now as expected. Rabi oscillations without making any assumptions about perturbation theory. So this is an exact result for the initial conditions that we start in the ground state, and the initial population of the excited state is 0. And as usual, I have used here the generalized Rabi frequency, which is the quadrature sum of these matrix elements squared and the detuning. OK. A lot of it was to get ready for the broadband case. 
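To make the correspondence concrete, here is a minimal numerical sketch (my own illustration, not from the lecture) that integrates the Schrodinger equation for a two-level atom driven by cos(omega t), once with the full drive and once keeping only the co-rotating term, and compares the result with the Rabi formula P_e = (Omega^2/Omega'^2) sin^2(Omega' t/2). All parameter values are arbitrary.

```python
# Two-level atom driven by a linearly polarized field: full drive vs. RWA.
import numpy as np
from scipy.integrate import solve_ivp

w_eg  = 2 * np.pi * 10.0   # atomic splitting (arbitrary units, hbar = 1)
w     = 2 * np.pi * 9.8    # drive frequency -> detuning delta = w - w_eg
Omega = 2 * np.pi * 0.5    # Rabi frequency, weak compared to w_eg
delta = w - w_eg

def schrodinger(t, c, rwa):
    cg, ce = c[0] + 1j * c[1], c[2] + 1j * c[3]
    if rwa:
        # keep only the co-rotating part of the drive
        drive_eg = 0.5 * Omega * np.exp(-1j * w * t)
    else:
        drive_eg = Omega * np.cos(w * t)
    # H = w_eg |e><e| + drive_eg |e><g| + h.c.
    dcg = -1j * np.conj(drive_eg) * ce
    dce = -1j * (w_eg * ce + drive_eg * cg)
    return [dcg.real, dcg.imag, dce.real, dce.imag]

t = np.linspace(0.0, 10.0, 2000)
sols = {label: solve_ivp(schrodinger, (0.0, t[-1]), [1, 0, 0, 0], t_eval=t,
                         args=(rwa,), rtol=1e-8, atol=1e-10)
        for label, rwa in [("full", False), ("RWA", True)]}

Omega_gen = np.sqrt(Omega**2 + delta**2)               # generalized Rabi frequency
P_rabi = (Omega / Omega_gen)**2 * np.sin(Omega_gen * t / 2)**2

for label, sol in sols.items():
    P_e = sol.y[2]**2 + sol.y[3]**2
    print(f"{label:4s}: max P_e = {P_e.max():.3f}  "
          f"(Rabi formula predicts {P_rabi.max():.3f})")
```

With Omega and delta much smaller than omega_eg, the two numerical curves agree with the formula up to small corrections coming from the neglected counter-rotating term.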
So that's-- yes, we have a little bit more than five minutes. So, so far, we have discussed the monochromatic case. What I really needed as a new result, because I carried over for the board-band case, was a perturbative result. But I also wanted to show you that the perturbative result is one limiting case of the exact solution, which I just derived by analogy to spin 1/2. OK. So we just had the result that in perturbation theory, for sufficiently short times, we discussed all that, that the excited state amplitude has the following dependence. So this is nothing else than-- I want to make sure you recognize it-- Rabi oscillations at the generalized Rabi frequency. The generalized Rabi frequency is simply the detuning, because it's a perturbative result. In perturbation theory, you don't get power broadening, because you assume that your drive field is perturbatively weak. So therefore, the Rabi oscillation, our now Rabi oscillation where the Rabi frequency, the generalized Rabi frequency, is delta the detuning. And this is just rewriting. Let me just scroll up. This is this result here. I wasn't commenting on it. But this is nothing else than the detuning. Look. I'm just reminding you what you get from perturbation theory. Power broadening is not part of perturbation theory. OK. So this is our perturbative result. And now, we want to integrate over that because we have a broadband distribution of the light. So what we have to use now is the energy density, W of omega. The electric field is related, the energy of the electromagnetic field, is 1/2 epsilon the energy density of the electromagnetic field, is 1/2 epsilon naught times the electric field squared. Well, if you have many modes, we add the different modes in quadrature. And we still have the same reaction between the electric field squared and the total energy. But the total energy is now an integral over d omega. We integrate over frequency over the spectral distribution of the light. So this is how we go from energy density to electric feels. But now, we want to evaluate this expression. And what appears in this expression is the Rabi frequency. Well, what we have to do now is we have to go back from the Rabi frequency. We assume linearly polarized light in the x direction to the electric field. And that means, now, that when we-- OK. We want to now take this expression, and sum it up over all modes, which means we integrate over, we write the Rabi frequency squared as an electric field squared. And the electric field squared is obtained as an integral over the spectral distribution of the light. So this means we will replace the Rabi frequency in this formula by an integral over the energy density of the radiation. We have the matrix element squared as a prefactor. I just try to re-derive it, but I think the prefactor is 2 over epsilon naught. So, yes. With that, in perturbation theory, the probability to be in the excited state is-- let's just take all of the prefactors. Now, I change the integration variable from omega to detuning, we just go from resonance-- we integrate relative to the resonance. So our energy density is now at the resonance, omega naught plus the detuning. And we have this Rabi oscillation term. OK. So this is nothing else than taking our perturbative Rabi oscillation formula, which is coherent physics, and indicate over many moles. . I'm one step away from the final result. If the energy density is flat, is broadband-- so for the extreme broadband case, we can pull that out of the integral. 
And then, we are left only with this function, F of t. And you can discuss this function, F of t, is a standard result. And we have seen many discussions in perturbation theory. If I plot this function, versus delta, we have something which has wiggles. Then, there is a maximum, and it has wiggles. The width here is t to the minus 1. And the amplitude is t squared. And this is the excited state amplitude squared. So if we integrate that over delta, we get something which is linear in t. Something which goes as t-square, and has a width 1/t. Yes, time is over. So the function F of t, which is under the integrand, starts out at short times, proportion to t squared, as we discussed. Maybe my drawing should reflect that. But then it becomes linear. So for long times, the function F of t becomes linear in t and the delta function in the detuning delta. This is what you have seen many times in the derivation of Fermi's golden rule. I'm running out of time now. I'll pick up the ball on Wednesday, and we'll discuss that result and put it into context. But the take-home message-- and what I really wanted to show you is that we do have coherent Rabi oscillations. And by just performing the integral over this broad spectrum of the light, we lose the Rabi oscillations, and we find rate equations, Fermi's golden rule, and excitation probability proportional to t. And we have done the transition from coherent physics to irreversible physics. This is all hidden in this one formula, but I want to fully explain it when we start on Wednesday. Any last second question about that? Cody? STUDENT: It looks like we're integrating right over the point to where perturbation theory becomes an exact, because we're integrating over delta equals 0. And that's the most important part. PROFESSOR: We are integrating over it, but we are integrating over it with the [INAUDIBLE]. So therefore, since we have-- perturbation theory remains valid, actually. Perturbation theory remains valid, as long as the excitation probability is less than one. So I have not put a scale on it, but we can go from a quadratic dependence to linear dependence. As long as the probability of being in the excited state is smaller than 1, perturbation theory is exactly valid. So I think what confuses you here is, we can do resonant excitation. The broadband includes resonant excitation. But for sufficiently short times, we reach the rate equation before we run out of [INAUDIBLE] perturbation theory.
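As a numerical sanity check on the function F(t) described above (my own sketch, not from the lecture): integrating sin^2(delta t/2)/delta^2 over the detuning gives pi t/2, i.e. something linear in t, which is exactly the step that turns coherent, quadratic-in-time growth into a golden-rule rate once the spectral density is flat enough to be pulled out of the integral.

```python
# Check that the detuning integral of F(delta, t) = sin^2(delta*t/2)/delta^2
# grows linearly in t (the broadband / Fermi's-golden-rule limit).
import numpy as np

delta = np.linspace(-400.0, 400.0, 2_000_001)   # detuning grid, arbitrary units
ddelta = delta[1] - delta[0]

for t in [1.0, 2.0, 5.0, 10.0]:
    F = (t / 2.0)**2 * np.sinc(delta * t / (2.0 * np.pi))**2   # = sin^2(dt/2)/d^2
    integral = F.sum() * ddelta
    print(f"t = {t:5.1f}:  integral = {integral:6.3f}   pi*t/2 = {np.pi * t / 2:6.3f}")
```

The small deficit relative to pi t/2 comes from the finite integration window. If I reassemble the prefactors correctly, pulling a flat spectral density w(omega_0) out of the integral then gives a rate of order pi d^2 w(omega_0)/(epsilon_0 hbar^2) for a dipole aligned with the polarization, with an extra factor of 1/3 for an isotropic average; the precise prefactor should be checked against the lecture's own definitions.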
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
19_Line_Broadening_III.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Let's go back to our discussion of spectral broadening. And what I started to derive for you is perturbation theory of spectral broadening, which is a very general framework. I like it, because it really provides you insight into spectral broadening. But I will also hope it provides new insight for you for time-dependent perturbation theory [INAUDIBLE]. So what we did in this lecture is we pretty much did standard type perturbation theory. We just did it by assuming that we have a general time-dependent perturbation. I'm not yet telling you what the time-dependent perturbation is. It has actually fluctuating. It has inhomogeneous. It has everything in it which will later lead to line-broadening line shifts. And the important quantity which is now describing everything we are interested is the correlation function between the perturbation at time t pi and t. And we call this correlation function G. And it is either the correlation function G of t, or it's Fourier transform, which tells us what the excitation rate of the system is. I want to give you sort of two summaries now. And they're both general to perturbation theory. But I have to say, I myself learned something about time evolution of quantum systems from those examples. So what we get is a very generic thing, that when we look at the probability to be in the final state, the amplitude V of t squared, we start out quadratic, a very general behavior of any quantum system, because your linear equation, which linearly puts amplitude into the excited state is quadratic. And also, this is the beginning of a Rabi oscillation. The Rabi oscillation starts out with 0 slope. But then you have some de-coherence type. The fields are no longer driving the system in a coherent way. And then that's where we enter the regime of Fermi's golden rule, the probability to be in the excited state only goes linear. And this is when we have rate equations. Let me sort of show how it comes about in equations, which I think is really nice. The differential equations is that in a time delta t, we add amplitude delta t to the excited state. And so the amplitude we build up is-- I may have called an h bar. Or maybe I measured [INAUDIBLE] frequency units. It's just the matrix element times delta t. And we usually call the matrix element the Rabi frequency over 2. So you should think about it, that we add sort of this amplitude. And as long as we are coherent, I have to take the amplitude 1/2 the Rabi frequency delta, then square it. And here, you get the quadratic behavior. This behavior you also have constructive interference. All those delta [INAUDIBLE] are added in a phase-coherent way. But this behavior ends, of course, at the coherence time. Let me just get my notes. So when the time becomes comparable to the coherence time, then we are adding amplitude not as constructive interference-- we are adding B squared [INAUDIBLE] becomes sort of a [INAUDIBLE], and we add things in quadrature. So what happens is at the coherence time, we have created an amplitude which is given by this expression. But now if time goes by, we add in quadrature t over tc chunks of that. 
So therefore, our B square, which we build up with time, is linear in time and is omega Rabi squared over 4, the matrix element times the coherence time tc. And this here now is our rate in the rate equation. And this is also what we exactly got out of the correlation function formalism. So this is how you should think about it. You go from constructive interference in amplitude and adding things in quadrature. Let me maybe add one more discussion to it, which I hope will help you to see the big picture. We have a matrix element V. And if I just ask you, think about Fermi's golden rule, you would find out that the rate is V square times the density of states at the resonance. And sometimes instead of the density of states, you write a delta function, which is just a placeholder that you should do an energy integral. And then you get the density of states. OK. So this is sort of Fermi's golden rule, and it should be old hat to you. But what I told you now is that what is involved here from the time integration is the correlation function, V of 0, V of t as two different types. But then because of the type integration of Schrodinger's equation, we had a time integral dt. But V of 0, V of t, remember, we have a correlation between the field V of 0. And then it decays with time. This here can be written V of 0 squared times the correlation time tau c. So therefore, the correlation function, it's a general formalism. It allows us to deal with all time dependencies. But in essence, the time-integrated correlation function is nothing else than your operator, your perturbation operator, time t equals 0 squared times the correlation time. And the correlation time, the inverse of the correlation time, is the spectral widths of your drive field. And this is what the spectral widths is in Fermi's golden rule. On the other hand, if you do the integration V of t dt, think about it as Fourier transform, it gives us actually some V of omega. It gives us the Fourier transform. Well, I see it omega. If you just do V of t dt, you get the Fourier transform at 0 frequency. But if you look through the derivation we did that was an e to the i omega resonant term, which we were just spitting out. So we have sort of shifted the origin of frequency. So therefore, the Fourier transform, if I correct for this just offset infrequency which I introduced for simplicity, this is nothing else than asking whether the drive field has a Fourier component which can resonate the drive to transition. And we talked last class about the convolution. This gives actually the power spectrum. It keeps the Fourier transform of it squared. But the power spectrum is, of course-- when you take a field and fully analyze it, [INAUDIBLE] is the power spectrum, the whole power and the whole intensity of your laser, of your field is spread out over the banquets of the source. So when you say, the power spectrum, the power spectrum is automatically intensity divided by spectral widths. And that's where the delta function and the spectral widths in the normal formulation of Fermi's golden rule come in. So I'm just emphasizing I haven't really done anything new than just giving you a general notation. And I would actually say-- whenever you were asking yourself about Fermi's golden rule, this is the full [INAUDIBLE]. This is about a fully time-dependent arbitrary time field. 
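Here is a compact LaTeX version of this bookkeeping; numerical factors of order unity depend on exactly how the coherence time tau_c is defined, so treat them loosely.

```latex
|b(t)|^{2} \simeq \Bigl(\tfrac{\Omega_R t}{2}\Bigr)^{2}\ \ (t \lesssim \tau_c),
\qquad
|b(t)|^{2} \simeq \Bigl(\tfrac{\Omega_R \tau_c}{2}\Bigr)^{2}\frac{t}{\tau_c}
           = \frac{\Omega_R^{2}\,\tau_c}{4}\,t \ \ (t \gtrsim \tau_c),
\qquad
\Gamma \;=\; \frac{1}{\hbar^{2}}\int d\tau\,
   \langle V(0)\,V(\tau)\rangle\, e^{\,i\omega_0\tau}
   \;\sim\; \frac{|V|^{2}\,\tau_c}{\hbar^{2}} .
```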
And you realize, what was re-squaring Fermi's golden rule is actually just the simple case, the more general case-- and the more general case is it's a correlation or it's a power spectrum which creates the rate in Fermi's golden rule. Any questions about that? OK. So we have this powerful formalism. So all we have to do is when we want to understand what is the rate and the spectral widths of the spectroscopic features is we have to understand what is the field driving the atom. And then we take the Fourier transform, or we figure out what the correlation is. We take V of 0 squared times the correlation time, and we know what the rate is. And this will allow us major insight into spectral broadening. OK. But after introducing that in general, I think it's time for a simple example. And as a simple example, I just thought I'd show you how we get the natural line widths. And of course, the natural line widths-- I've mentioned several times, we need Optical Bloch equations. We have to really kind of capture the interplay between the coherent drive of the atom and the spontaneous decay. But just phenomenologically, if I say the excited state has a decay rate of gamma over 2, I mentioned it before, we can capture some aspects of spontaneous decay. And that means now, remember, our spectral widths comes from the correlation function of the matrix element. And now the operator is constant, but the state is decaying. So therefore, the matrix element has this exponentially decaying function. And if you then ask, what is the correlation function at time 0 and time tau, we get this. Of course, the Fourier transform of an exponentially decaying function is a Lorentian. And I've shown you now with a different formalism that yes, if you have dampening in the excited state, for instance, you do spontaneous decay, we get the Lorentian. Of course, we know there should be power broadening, too. But of course, don't expect power broadening from a perturbative approach, because a perturbative approach is only valid when we have weak drive fields. Questions? OK. So now I want to-- after this simple example, I want to explain to you Doppler broadening. I know I will present it in a way and draw some conclusions which are usually not done in the normal presentation of Doppler broadening. So what we have to bring in now is that we have moving atoms. That's what Doppler broadening is about. We drive the field with an electromagnetic wave. And we have usually ignored the spatial dependence, assuming the atom was clamped down in c equals 0. But now we have to allow for motion. So the relevant matrix element for which we want to calculate the correlation function has now a spatial dependence. And our correlation function, G of ba, involves now-- I call the Rabi frequency now x for, well, simplicity of-- [INAUDIBLE] took this material, because they use it. OK. So now we have the correlation function. We have the temporal part, t prime minus t. But then we have a spatial part. And this is now the new part which has to account for that the atom is moving from one position to another one. And when we calculate the correlation function, we have to average now over the velocity distribution of the atom. So this new part, which will account for Doppler broadening, I called this part I. And z of t prime minus z of t is simply the velocity of the atom times tau. Tau is t prime minus t. So atoms move by that. And v is the z component of the velocity. So now we have to calculate that. 
And of course, we assume that we have a Maxwell-Boltzmann distribution. We assume a Maxwell-Boltzmann distribution. And then our expression I, we have to convolute-- we have this term, e to the i k v tau. But now we have to convolute it with the one-dimensional Maxwell-Boltzmann distribution, where alpha is the most probable velocity. And since the velocity distribution is normalized, we have this prefactor. Alpha is the most probable speed, and mu is the mass of the-- sorry, it's not mu, it's M. And M is the mass of the atom. This integral, of course, can easily be solved. And we find that it provides a Gaussian envelope in the time tau, so it decays with the time tau. And our rate, which is the matrix element-- it involves now the temporal integral over-- let me just scroll up. In our correlation function, we had a temporal part and a spatial part. And the rate-- this was Fermi's golden rule-- is the time integral over the correlation function. So therefore, we have the exponential factor in time. After convolution with the Maxwell-Boltzmann distribution, we have this exponentially decaying term. And the result of that is, well, hooray, we have re-derived the Gaussian profile for a Doppler-broadened line. OK. But yes, we want to look at this result with some new eyes, because until now, you would have said, OK, that's really trivial. Each atom has a velocity. A Galilean transformation into the moving frame means the frequency is shifted, and everything falls together. And yes, that's one way to look at it. In inhomogeneous broadening, each atom has its own velocity. But now we want to look at it from the viewpoint of a correlation function describing the whole system. So we had calculated this correlation function, which the atomic ensemble experiences. And this correlation function here decays as a function of time. And it decays with a characteristic time, tau c, which is 1 over k alpha. And this is nothing else than the reduced wavelength lambda bar divided by the most probable velocity. OK. So we want to relate the line width to some form of coherence. And what we realize is the coherence time of the correlation function, which gives Doppler broadening, is the time it takes an atom with the most probable velocity to move one wavelength. But wait. What we have is, in the Maxwell-Boltzmann distribution, the most probable velocity is also the width of the velocity distribution. So therefore, what we can say is, if all the atoms would start at one position, after the correlation time tc, the atoms have spread out over one wavelength. So therefore, the correlation time tells us how long the whole ensemble is driven coherently. But once the atoms, due to their motion, have spread out, compared to their initial position, by a wavelength, each atom experiences now a different phase of the drive field. And that sets a limit to the coherence. And this is the point where, when we ask how many atoms are getting excited, we can no longer add amplitude in a constructive, linear way. We are adding amplitudes in quadrature. And this is exactly what I explained to you at the beginning of this lecture. So let me just summarize it. Alpha is the thermal spread of velocities. So the keyword is here. Atoms, in a random way, spread out by lambda in the coherence time tc. Well, a question which should come to your mind now is, but what happens when the atoms are in an atom trap and they cannot spread out? That's what we want to discuss in a few minutes.
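[Aside, not from the lecture: a small numeric sketch of the coherence time 1 over k alpha and the resulting Doppler width. The choice of sodium at 589 nm and 300 K is an illustrative assumption, not a case discussed in class.]

```python
import numpy as np

kB = 1.380649e-23       # Boltzmann constant, J/K
amu = 1.66053907e-27    # atomic mass unit, kg

M = 23 * amu            # assumed atom: sodium
T = 300.0               # assumed temperature, K
lam = 589e-9            # assumed optical wavelength, m

alpha = np.sqrt(2 * kB * T / M)   # most probable speed (1/e width of the velocity distribution)
k = 2 * np.pi / lam               # optical wavevector

tau_c = 1 / (k * alpha)                                   # time to move one reduced wavelength
fwhm = 2 * np.sqrt(np.log(2)) * k * alpha / (2 * np.pi)   # FWHM of the Gaussian Doppler profile, Hz

print(f"alpha        = {alpha:.0f} m/s")
print(f"tau_c        = {tau_c * 1e9:.2f} ns")
print(f"Doppler FWHM = {fwhm / 1e9:.2f} GHz")
```

For these numbers the coherence time is a few hundred picoseconds and the Doppler width comes out a bit over 1 GHz, the familiar order of magnitude for a room-temperature alkali vapor.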
But before I do that, let me give you another interpretation, which is helpful. You can regard-- if you have a thermal ensemble, you can say, OK, I have a box. Each atom is a plane wave with a perfect velocity, and now an ensemble of that. But very often, especially if you do localized physics, you want to regard each atom as a wave packet. And if you want to use a localized description of your gas, where atoms are wave packets, then you assume, in a consistent description, that the atoms are spread out due to the momentum spread in the Maxwell-Boltzmann distribution. This is nothing else than h bar divided by the mass and the most probable velocity. And this is, up to factors on the order of unity, nothing else than the thermal de Broglie wavelength. OK. So now we have the picture that the atom is a wave packet in the ground state. But now we excite it with a laser. Well, if we excite it with a laser, part of the wave packet goes to the excited state. But the atoms in the excited state, because they have absorbed the photon, the recoil of the photon, are now moving away from the ground state part of the wave packet with the recoil velocity, with this h bar k over M. So if I regard the atom as a wave packet, the natural question is, when does the ground state part of the wave packet lose overlap with the excited state part of the wave packet? It loses overlap after a time-- well, I could derive it for you, but it can only be the coherence time. So the time is the size of the wave packet divided by the recoil velocity. The mass cancels out. H bar cancels out. And this is 1 over k alpha. And this was exactly our coherence time. So therefore, when I'm telling you you should understand this picture, Doppler broadening is a loss of coherence for the ensemble. You have now two ways to describe it. One is you can say, in a more quantum mechanical way, after the coherence time, the recoil velocity has separated the ground and excited state parts of your wave packet. Or you can say, when the atoms in the ensemble have a velocity spread of alpha, then they have spread out by the optical wavelength. So these are two equivalent pictures to understand why this ensemble is no longer coherently driven. Yes? AUDIENCE: [INAUDIBLE] in both of the pictures, like, why is, in the first picture, the optical wavelength not relevant; in the second one, [INAUDIBLE] not relevant? [INAUDIBLE] distance. PROFESSOR: Well, because these are two different pictures, but the results agree. I mean, the wavelength comes into the picture with the wave packets through the recoil velocity, because the recoil velocity is h bar k over M, and k is the inverse wavelength. AUDIENCE: Right. But when we write delta t there, we write [INAUDIBLE] the wavelength of the packet here, and we could have also just thought of it as, when does the atomic ensemble become bigger than the optical wavelength? But here, we are [INAUDIBLE]. PROFESSOR: I think for consistency-- I'll give you a quick answer. I should think about it longer, but what we usually assume when we describe atoms by wave packets is that the atoms are not cooled below the so-called recoil limit. So we assume that the thermal de Broglie wavelength is shorter than the optical wavelength. And so then if you would say, you would expect also that there would be something happening when the wave packet spreads out by an optical wavelength-- I want to think about it more.
But the quick answer is, just assume what is usually the semi-classical limit of these kind of pictures, where we assume that we have a hierarchy that the thermal de Broglie wavelength is much larger than the size of the atoms but smaller than the optical wavelengths. AUDIENCE: That makes sense. PROFESSOR: A few things happen, really, in intuitive pictures when you cool atoms before the recoil limit. OK. There's one reason why I would like to express it to you. Armed with that knowledge, if I would now ask you-- you have a trapped Bose-Einstein condensate, and you take the spectrum. What is the Doppler widths of the spectrum of a Bose-Einstein condensate? [INAUDIBLE] extra Maxwell-Boltzmann distribution. No. The condensate is different. It's in one quantum state. But now you can choose your picture. One picture you can take is you can say, the de Broglie wavelengths here has to be-- you know, the wave packet loses overlap when the excited state moves one de Broglie wavelength. But the condensate is fully coherent. So you would now say, maybe I should think about if-- if the condensate part of it is coupled to the excited state, and with a recoil, the excited state component has moved the size of the condensate, replacing the de Broglie wavelengths by the size of the condensate. And this is a correct answer. You would then find out what is, quotation mark, the Doppler broadening of a condensate. Of course, you could have also said, the condensate is a certain size, h bar divided by the size is the momentum spread if you do Heisenberg's uncertainty relation. And now I plug in this momentum spread into a formula for the Doppler broadening. And you would get the same result. But especially when you think in terms of a coherent wave function, this picture of losing overlap between the two parts of the wave packets is very intuitive, very useful. And it actually guided a lot of our intuition when we looked at the limitations of super radians and optical spectroscopy with Bose-Einstein condensates. Any questions? OK. So now we are ready to take it to the next level. When I told you that the spectral widths is the inverse of the coherence time. And one way to think about the coherence time is that the particles spread out over one wavelength. So if you take this thought seriously and say, what happens if I confine atoms in a container or an atom trap to less than the optical wavelengths, then you would say, they can never spread out by an optical wavelength. Does it mean that the coherence time is now infinite and that we can do spectroscopy, which is no longer affected in any way by Doppler broadening. Well, what I just motivated in words is the so-called Lamb-Dicke limit of tight confinement. And as I want to show you now, yes indeed, you have a very, very sharp line which is not broadened by Doppler broadening, which is not shifted by the recoil shift. It's really the unperturbed line of the atom which can be probed by confining the atoms to less than a wavelength. So therefore, let's now discuss the line shape of confined particles. So what I want to present you now is we have particles trapped in a harmonic oscillator. And in one limit, which I want to explain you, we should just find the normal Gaussian Doppler profile which we have obtained for free gas. This must be the limit when the [INAUDIBLE] confinement is very weak. But for tight confinement, we should actually find, unless we assume other means of line broadening, a delta function spectral feature. 
And I will explain to you that this is actually the same as the Mossbauer effect. It's a Mossbauer line due to the confinement. So therefore, to have trapped particles allows us to go to the ultimate limit in precision spectroscopy. What happens when you have trapped particles-- the Mossbauer effect, which I mentioned, or simply the effect of confinement-- in other words, the trapping potential is completely eliminating the Doppler effect. But I want to be specific. It only eliminates the first-order Doppler effect. Everything I just did with the correlation function assumed first-order Doppler effect. If you want to get rid of the second-order Doppler effect, then you need some form of proving. But usually, when you do experiments with trapped particles, you do confinement and cooling at the same time. OK. Let me start out with very basic things. So let's talk about the spectrum of an oscillating emitter. If we have an atom, it undergoes a transition from excited state b to excited state a, this is an internal state. But now we want to include motion. And we have to include the external degree of freedom. And for our discussion right now, the external degree of freedom is [INAUDIBLE] trapping potential. So now we look at the combined system, combined-- we can say Hubert space, which combines external and internal motion. And of course, the external motion is now quantized. I can't assume, but it doesn't really add anything to it at this point-- that the trap frequency and the ground and excited state are different. I simply assume that the trap frequency omega trap is the same. This, of course, is excellently fulfilled in ion traps. If you reduce spectroscopy of neutral atoms in a dipole trap, of course, the ground and excited state may experience a different AC stark shift. And then you have to account for two different frequencies. But let me just make this simplifying assumption. So if you assume that an atom emits radiation, it will, for energy conservation, emit at the electronic energy. But then there is an extra term, which, in general, is the energy of the external motion, or the trapping potential for the initial state minus the final state. And if we now make our simplifying assumption that everything is harmonic, that hyper-potential is harmonic, and the trapping frequency is the same in the excited state and the ground state, we simply have the electronic energy plus n quanta-- n is the change of the number of quanta of the harmonic motion. I want to point out, it looks so trivial. But you should at least think for a second about this statement, that this formula includes the Doppler shift and the recoil shift. And of course, this is trivial, because we are talking here about the total energy of the external state, the total energy of the final state after photon emission. And the energy and the trapping potential includes all the kinetic energy of the particle, which includes whatever comes from [INAUDIBLE] velocity or from the motion of the atom. So therefore, we obtain what is called the sideband spectrum. I'll just show you a stick diagram. Here is the electronic transition. And then we have sidebands. And the spacing of those sidebands is nothing else than the harmonic oscillator frequency. Any questions? At that level, I want you to appreciate that this is radically different from Doppler broadening. There is no Doppler broadening. By fully quantizing the motion in the harmonic oscillator, we obtain a discrete spectrum. 
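[Aside, not from the lecture: with equal trap frequencies in ground and excited state, the stick spectrum just described is simply]

```latex
\hbar\omega_{\mathrm{photon}}
  \;=\; \hbar\omega_{eg} \;+\; \Delta n\,\hbar\omega_{\mathrm{trap}},
\qquad \Delta n = 0,\,\pm 1,\,\pm 2,\,\dots
```

where Delta n is the change in the number of vibrational quanta; the Doppler and recoil effects are entirely contained in the second term.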
And what I want to show you is, when we calculate the intensity in the peaks, for strong confinement, almost all of the intensity is in the central peak. And therefore, there is no Doppler broadening. But I want to later show you how we can go from the discrete spectrum back to the Doppler broadening which we just described in free space. You should say, well, but the motion, the recoil, it must come in. Where is it? Well, it's not in this stick diagram. But the sticks are only the possibilities for the possible photon frequencies or photon energies. The question is, what is the amplitude? What is the probability that this will happen? And you already see where I'm aiming. In the limit that we have many, many sticks and we are not resolving the sticks, we will get back to the standard Doppler broadening. So the big question is, how many of those sticks do we have? Are we in the limit where things are heavily discrete? This is our new limit. Or do we have many, many of them? So therefore, the recoil and the velocity, they really enter when we calculate the intensity. And whatever our formulation is with Fermi's golden rule, the rate is proportional to the relevant matrix element squared. And now I want to show you how we calculate those matrix elements. You had a question? AUDIENCE: But is it true that each of these sticks has the intrinsic line width of the atom? PROFESSOR: Yes. We come to that in a few minutes. I ignore here the spontaneous broadening just for pedagogical reasons. But a little bit later, I will-- I first want to sort of discuss the number of sticks. Do we have a few? This is sort of new-- then we have only a few sidebands, and we have the Mossbauer effect. If we have many, that's sort of more the continuum, which we described with the classical velocity distribution. That's my message number one. But then the next message is, do we resolve the sticks? Do we have resolved sidebands or not? And for that, the criterion is, is the natural line width larger or smaller than the sideband spacing? It's not just one parameter; there are two parameters. One, which will be the Lamb-Dicke parameter-- how many sticks do we have? And the second question is, do we resolve the sticks? And you can say there are four different regimes, you know-- yes or no for question one, and yes or no for question two. Other questions? OK. So the rate is proportional to the matrix element squared. And yes, we have all kinds of matrix elements involving the internal degree of freedom. But the new thing is the matrix element for the center of mass wave function of the atom, which involves-- you can just use the eigenfunctions of the harmonic oscillator, the Hermite polynomials. So we have the eigenfunctions of the harmonic oscillator for the initial and final state. And the only part of the electromagnetic field operator which acts on the position of the atom is this term, e to the minus ikr. OK. I mentioned already that the new regime is that the confinement is tight. So let's just look at this situation. When kr is much smaller than 1, then we can expand this exponential into 1 minus ikr. And now I want to remind you that the position operator, when we treat the harmonic oscillator, is nothing else than a plus a dagger. So therefore, if kr is small and we can do the first-order expansion of e to the ikr, our operator is here-- 1 minus i k times a prefactor times a plus a dagger.
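[Aside, not from the lecture: written out, the expansion just described is the following, where x0 is the harmonic oscillator length and eta is the Lamb-Dicke parameter mentioned above.]

```latex
e^{-ikx} \;\approx\; 1 - ikx
  \;=\; 1 - i\eta\,(a + a^{\dagger}),
\qquad
\eta = k\,x_{0},
\qquad
x_{0}=\sqrt{\frac{\hbar}{2M\omega_{\mathrm{trap}}}}
```

The 1 gives the carrier, and the term proportional to a plus a dagger gives the Delta n equals plus or minus 1 sidebands, with intensity of order eta squared.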
And therefore, the only possible sidebands are the ones where the change in harmonic oscillator quantum number is plus or minus 1. So we are already obtaining the result, which is called the Lamb-Dicke limit when kr is much smaller than 1, that we have a strong carrier. We have only the sticks with delta n equals plus or minus 1. And the intensity in each of those sticks is actually proportional to k squared r squared. Since k is 1 over lambda bar, this is nothing else than the extension of the atomic wave function squared divided by the reduced optical wavelength squared. So we see already-- I mean, without any major formalism or mathematical tools-- what happens in the limit of tight confinement. The spectrum of confined particles is, except for the spontaneous line width, a delta function without any Doppler shift, without any recoil shift, right at the resonance frequency, the electronic frequency of the atom. The only thing which is reminiscent of the motional degree of freedom are those small satellites. But their intensity goes to 0 with the extension squared over lambda squared. So therefore, if you confine the particle to less than an optical wavelength-- in the picture I gave you earlier, it can never spread out over a wavelength, can never get out of coherence with the drive field. And here, we have a quantitative description that at that moment, we can obtain spectroscopic information about the resonance completely unperturbed by motional effects. Questions? OK. So that was maybe the most fun part, the extreme limit, and you realize what happens. But now we want to sort of fill in the gaps. I first want to sort of contrast what I just described to you with a semi-classical picture. The semi-classical picture-- if you have an emitting oscillator or absorber and we have an electromagnetic plane wave, we can now ask, what is the phase of the plane wave experienced by the atom? And of course, the phase is affected by the motion of the atom if we assume the atom oscillates harmonically with the trap frequency omega t and an amplitude x0. Then this here is the phase seen by the atom in its own reference frame. And if I then define-- with another quotation mark, because it's sort of something which needs explanation-- if I define an instantaneous frequency, which is nothing else than the derivative of the phase, then I retrieve the normal Doppler broadening. So now you see where sort of normal Doppler broadening would come in. But the question is, you cannot measure an instantaneous frequency. It would violate the Fourier theorem. But if the atom oscillates slowly enough-- the motion is slow enough that we can apply the concept that we can look at the frequency the atom experiences-- you would at least say, before the atom changes its velocity, it should see a few cycles. If it oscillates fairly fast, this concept, of course, cannot be applied. But at least you see where your normal Doppler shift comes in. It comes in through the concept of an instantaneous frequency. Of course, what we should do now is not take the concept of the instantaneous frequency. We should rather do it correctly. What we have is the phase seen by the atom. So therefore, we should-- using the motion of the atom for x-- do the Fourier transform. The Fourier transform tells us what is the spectrum which the atom experiences. So therefore, we take this plane wave, we put in the oscillatory behavior of the atom, and then we take the Fourier transform. OK.
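[Aside, not from the lecture: to put numbers on the tight-confinement limit just discussed, here is a sketch for a hypothetical trapped ion; the species, wavelength, and trap frequency are illustrative assumptions.]

```python
import numpy as np

hbar = 1.054571817e-34
amu = 1.66053907e-27

M = 40 * amu          # assumed ion mass (e.g. a calcium ion)
lam = 729e-9          # assumed transition wavelength, m
f_trap = 1.0e6        # assumed trap frequency, Hz

k = 2 * np.pi / lam
x0 = np.sqrt(hbar / (2 * M * 2 * np.pi * f_trap))   # ground-state extension of the wave function

eta = k * x0          # Lamb-Dicke parameter: extension over reduced wavelength
print(f"x0  = {x0 * 1e9:.1f} nm")
print(f"eta = {eta:.2f}  ->  sideband weight ~ eta^2 = {eta**2:.3f}")
```

With these assumptions x0 is about 10 nm, eta is about 0.1, and only about one percent of the intensity sits in each motional satellite, so the carrier really is a nearly unperturbed line.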
So the electromagnetic field seen by the atom is an amplitude which is the cosine of the phase. And the phase has a temporal dependence that's just the frequency of the plane wave. But now the phase involves the oscillation of the atom at the trap frequency omega t. And what I've introduced here is the parameter beta, which is called the modulation index. This is the relevant quantity. It is k, which comes from the plane wave, e to the ikr, times the amplitude of the atomic motion. And so the modulation index is nothing else than the amplitude of the atomic motion divided by the reduced wavelength of the light. Just remember for a second this extreme stick diagram where we had one big stick and two smaller sticks-- remember, the intensity in the smaller sticks was the extension of the atom divided by the wavelength, squared. So we have exactly the same parameter, but the previous description assumed a quantized picture for the harmonic oscillator. I really used a and a daggers and the atomic wave function. This is a semi-classical picture where I treat the oscillation of the atom in a classical description. But in both cases, of course, the relevant parameter is the ratio of the distance over which the atom moves-- the amplitude of the atomic motion or the size of the atomic wave function-- to the wavelength of the light. OK. So we want to Fourier transform this function. And this gives us the spectrum. I can show you that the result is that we have a Fourier expansion of this amplitude, and it involves Bessel functions. So it is the Bessel function J n which tells us what the intensity is in the nth sideband. And the argument of the Bessel function, whether we take the Bessel function at the origin or at a finite argument, is given by the modulation index beta. You can derive it in just two lines. We write the Fourier transform. All you have to use are those identities which involve the Bessel functions. Whenever you have the cosine of a sine or the sine of a sine and you Fourier transform, you get, naturally, Bessel functions. OK. So let's just look at the result. We talked about the stick diagram. The stick diagram was actually motivated quantum mechanically. But now we also find a stick diagram purely classically because of the periodic motion in the harmonic trap. But now what we obtain in the semi-classical limit is the height of each stick. Well, the height of the stick which is n sidebands away is given by the square of the nth Bessel function. And beta is the modulation index. And if beta goes to 0, that means the atom is not moving at all. The amplitude x0 is 0. This is the limit of tight confinement. Then all Bessel functions are 0 except the zeroth order. And that means we are back to a single delta function, a single stick in our stick diagram. So let's now take the result and discuss it. So what does the spectrum look like? Well, I want to give you two limits: the one where the atomic motion extends over much more than the wavelength, which means large modulation index beta, and the opposite case of small modulation index beta. But first, the limit where beta is large-- for large beta, the Bessel function becomes just a cosine of beta minus a phase. So if we use that-- and if we assume a thermal distribution for beta-- so we take our spectrum, and we just convolute it with a thermal distribution of amplitudes x0, which is, of course, just the Boltzmann factor.
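[Aside, not from the lecture: a short numerical check of the semi-classical stick heights; scipy's jv is the Bessel function of the first kind, and the two values of beta are arbitrary illustrations of the tight-confinement and large-excursion limits. For small beta, J0 squared is approximately 1 minus beta squared over 2, and J plus or minus 1 squared is approximately beta over 2, squared-- which is the Debye-Waller-like carrier fraction that comes up next.]

```python
import numpy as np
from scipy.special import jv

def sideband_weights(beta, n_max=60):
    """Relative intensity of the nth sideband for modulation index beta: J_n(beta)**2."""
    n = np.arange(-n_max, n_max + 1)
    return n, jv(n, beta) ** 2

for beta in (0.1, 5.0):          # tight confinement vs. excursion of several wavelengths
    n, w = sideband_weights(beta)
    carrier = w[n == 0][0]
    print(f"beta = {beta}:  carrier = {carrier:.4f},  total = {w.sum():.4f}")
    # the total is 1: all the intensity is distributed among carrier and sidebands
```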
And in that limit, we actually obtain a spectrum where the envelope of the sticks looks like the Doppler width. So if those two conditions are fulfilled, then we obtain the normal Doppler width. If, in addition, we assume that the sidebands are not resolved, because the natural line width is larger than the spacing-- so if we assume that in addition, then we just find the normal Doppler broadening as we have derived it in free space. So normal Doppler broadening is the limit of large modulation index, a thermal distribution of the modulation index, in the case of not resolved sidebands. The opposite limit, of course, is when beta is much smaller than 1. It means the amplitude of the oscillating particle is smaller than the wavelength. I mentioned it a few times, so I should write it down. This is the Lamb-Dicke regime. And then the relevant approximation for the Bessel function in this limit is 1 over n factorial times beta over 2 to the power n. And for the case that n is 1, this is just beta. And the amplitude squared is beta squared. It's x0 over lambda squared, as we had discussed before in the quantum mechanical treatment. So you see that whether you use matrix elements for the harmonic oscillator or the semi-classical Fourier transform with Bessel functions, both lead to the same result, that deep in the Lamb-Dicke limit, we have essentially three peaks. And the satellites are quadratically small in the modulation index. So this regime is particularly interesting for atomic clocks and for metrological applications if the sidebands are resolved. If the sidebands are not resolved, you have sort of a line shape which depends, let's say, on temperature, because x0 squared, if it's the thermal excitation, is proportional to temperature. So as you cool it down, you will actually see that your line shape changes. But the good thing is that once you resolve the sidebands, the line shape of the central carrier does not change anymore. And once you can resolve the sidebands, you can just observe the central carrier, and you obtain spectroscopic information which is no longer blurred, which is no longer affected by motion or by temperature. And that's, of course, the regime where atomic frequency standards want to operate. Questions? OK. Yes. Let me just write that down, because this is important. For resolved sidebands, you have sharp lines, no motional broadening. And the physics I described to you is actually analogous to the Mossbauer effect. In the Mossbauer effect, the intensity of the recoil-less emission, of this recoil-less line, is described by the Debye-Waller factor. So in this case, the Debye-Waller factor is 1 minus the probability for the two sidebands. So the same concept of Mossbauer line and Debye-Waller factor describes the physics of tightly confined particles. So if I use the analogy to the Mossbauer effect, the Mossbauer effect is the recoil-less emission and absorption of x-rays. So what we have here is a recoil-less absorption and emission of photons. Of course, the photon which is emitted and absorbed has momentum. There should be momentum recoil. So the question is, where does the momentum recoil go when the confined particle emits or absorbs on the carrier, the central line? How do we reconcile the result I derived for you with momentum conservation? A trapped particle, tightly confined. A photon comes, has momentum, the atom absorbs it. But the spectrum does not show any evidence for recoil shifts and such. Colin.
AUDIENCE: Must be absorbed by the trap somehow. PROFESSOR: Must be absorbed by the trap, yes. Your trap is anchored in the laboratory. And you transfer the momentum. The atom is attached to the trap. The trap is attached to the laboratory. So the object which takes over the momentum is your whole apparatus or, in the extreme case, the whole building. And there is, of course, a kinetic energy associated with that-- momentum squared over 2 m. But the mass is now the mass of the building. So therefore, there is no energy associated with absorbing the recoil. So it is as if-- and this comes back to some earlier remarks I've made-- when you have this absorption in the Lamb-Dicke regime, it is as if your two-level system has an infinite mass behind it. And that's sort of the situation how I told you you should often think about it: you should separate effects of the internal degree of freedom and the external degree of freedom by just assuming first that the atom has infinite mass. And once the atom has infinite mass, the motional degree of freedom doesn't matter. What I just explained to you is a practical way to endow your particle with infinite mass. Just connect it with tight confinement to your apparatus. And then for the momentum exchange with the photon field, it is actually the mass of the whole apparatus [INAUDIBLE]. Questions? AUDIENCE: Is there a more direct way to think about this for maybe the example of a magnetic trap, the mechanism by which this momentum is transferred? Or absorbed. PROFESSOR: OK. Magnetic traps for neutral particles are usually not in the Lamb-Dicke regime, so you have to be a little bit careful. But the explanation would be the same. The magnetic fields are like tight springs which connect the neutral atom to your coils. And therefore, you should just think about the magnetic trap in a mechanical model. Your neutral atom is connected to your coils with springs. And if you now hit the atom with a photon, because of the quantization, the discreteness of this spectrum, the photon, in most cases, does not have enough recoil to create a mechanical oscillation of your particle. The momentum goes through the springs to the support structure. But occasionally, with a probability which is the modulation index squared, you will actually promote the particle to the first state of harmonic motion. And then the atom has acquired some kinetic energy. But this probability can be made as small as you want by going to a smaller and smaller modulation index. That's the way I would think about it. Nancy? AUDIENCE: About the multiple lines that we get, I was wondering if the levels of the harmonic trap itself are blurred, which could happen in the lab if the trap depth is moving, for example-- the levels of the trap would get blurred. Would that result in additional broadening of this? Or how would that affect it? PROFESSOR: It depends. I think if you have some temporal broadening, you know, you just plug it into your correlation function, whatever shakes your system. If that means the atom sees some shaking in the phase of the electromagnetic field, it affects it. If you have an ensemble of atoms, or if you do the experiment repeatedly and your measurements are over the ensemble, and every time you do the measurement your magnetic trap has a slightly different [INAUDIBLE], and therefore a slightly different confinement, well, what you would see is that those sidebands fluctuate, but that the carrier is independent of the trap frequency.
So therefore, the carrier is actually the central peak-- would not be broadened by fluctuations in the harmonic confinement. Let me just make one comment. I derived the result for you-- at least that one-- by taking the amplitude of the atomic motion, the amplitude of the phase, and doing the Fourier analysis. I mean, this is exactly what we learned from the formalism of correlation function. You should take the amplitude of the perturbing field and fully analyze it. I didn't phrase it here in the language of correlation functions. But what I did was exactly what we learned from the correlation function formalism. OK. Let me maybe summarize in words what we learned. So what we learned from this discussion is that what matters for line broadening and obtaining spectroscopic information is the accumulated phase, the phase which the atom accumulates. And if different atoms in the ensemble accumulate a phase which is different by 2 pi, at that moment, we have reached what we call the coherence time of the correlation function. And the inverse of this time is a line broadening. But now we also discussed the case of tight confinement. The atom can be in very, very rapid motion. And the phi dot, the change of phase can be very rapid due to the instantaneous velocity. But if the atom turns around because it's in a singulatory motion, positive and negative Doppler shifts completely cancel, because you never allow the atom, in this periodic motion, to acquire a net phase. And therefore, the motional broadening is absent. So one way to think about this carrier is that the atom rapidly oscillates through plus kv and minus kv Doppler shifts. And the two cancel. So this is the reason for that. OK. I think we are now very well prepared for Dicke narrowing. Actually, I have to say, it's the first class this semester that I was teaching a little bit faster than I assumed. So I'm now right at the end what I prepared for today. But I know my notes sufficiently well that I can go on for 10 minutes. So the Dicke narrowing-- the last time I looked at it was two years ago. But let's get the physical picture. So I want to now apply what we have learned not to a trapped atom, but to an atom which is embedded in buffer gas. So just think one rubidium, or one sodium, or one lithium atom. And it is surrounded by a buffer gas of argon or neon. And I know sometimes when we do saturation spectroscopy to stabilize our lasers, we have a little glass cell, which has sodium or rubidium in it. But we also put an argon buffer gas into it. So that's the situation I want to describe now. But Colin, you had a question. AUDIENCE: Yeah. You wrote down the condition for resolving the sidebands as your trapped frequency being larger than your natural line width. An alkalizer-- if you had, say it were 10 megahertz line width. How do you actually resolve this without a ridiculously-- because people do sideband [INAUDIBLE], and they don't have 10 megahertz trap frequencies, do they? PROFESSOR: OK. So good question. The question is, the resolved sideband limit, how can we reach it? Well, it can be reached in ion traps. In ion traps, because you can put kilovolts on electrodes, you can really create harmonic oscillator frequencies which are many, many megahertz. And then you are at the resolved sideband limit, assuming that the natural lifetime, if that's the case, of many ions is in the megahertz [INAUDIBLE]. For neutral atoms, it looks like, you know, mission impossible. However, there's a way out, and this is the following. 
One is you can maybe [INAUDIBLE] one-dimensional optical latice. So in the latices, you at least have tight confinement of many, many kilohertz. But now you want to use a very narrow transition. If you use a very, very narrow transition, then even for 10 kilohertz external harmonic oscillator potential, [INAUDIBLE] resolved sideband [INAUDIBLE]. Now, for alkalis, you won't find an excited state which has a natural line width of 10 kilohertz. And this will be our discussion on Wednesday. If you use a Raman transition, we go from one count state with an off-resonant laser to another ground state, [INAUDIBLE] will be two photon transitions. But those two photons, since there's no intermediate state, can be regarded as, click-click, you absorb two photons. And in some picture-- that's a message I will give you next week-- the equivalent to a single photon. So now you have a two-photon transition, which transfers recall to the atom. So the effective wavelengths of the two-photon transition is because you have twice the photon energy. You have two photons involved which both have a recoil. So the effective k vector is two times the k vector of an atom. But the spontaneous line width is close to 0, literally 0, because you have a Raman transition between two common states. If you do Raman sideband cooling of neutral atoms, then you reach the Lamb-Dicke limit, you reach the limit of strong confinement. But you need a better [INAUDIBLE] transition. And there are, of course, some atoms which have a very narrow clock transition. But for many, many atoms which have hyperfine structure, you can resolve to Raman transition. OK. Let's now talk about Dicke narrowing. So we have a situation that we have an atom in buffer gas. And in most situations, when you put an atom into buffer gas, you get what is called collisional broadening. I will talk about collisional broadening on Friday. Just a reminder, we have class on Friday but not on the following Monday. So on Friday, we'll talk about collisional broadening. And I will discuss, for instance, the model-- an atom in the excited state, when it collides, it gets de-excited. And then you have pretty much a situation where you have, in effect, a shortened lifetime of the excited state. And what you get is a Lorentian which is broader, which has a width not of the natural line width, but a width which is [INAUDIBLE]. But there are situations-- and that's what I want to discuss here-- that we have atoms in a more benign buffer gas. Where we can assume that this is actually fulfilled, that collisions do not change. Well, they're not de-exciting the excited state. But they're not even changing the coherence between counter-excited state. So the phase evolution, the internal state, grounded excited state just-- if you can assume you have a Bloch vector which oscillates, and the Bloch vector superposition [INAUDIBLE] excited state oscillates at the natural frequency, and this Bloch vector just rotates, it doesn't have a hiccup, it doesn't change its phase when the atom collides with the buffer gas atom. So we assume that we have such a buffer gas where collisions don't change the internal coherence. And by internal coherence, I mean the phase between the ground state and the excited state. So in this situation-- but it's actually a very important situation which has been reached, in many cases. In this situation, we have-- thus, the buffer gas acts only on the external motion of the atom. And now you can say, in some way, the buffer gas acts like a trap. 
The particle wants to fly away, but it collides with the buffer gas atom. And with a certain probability or after a few collisions, it returns back to the origin. However, it's a lousy trap, because there is some randomness and effusive motion. So if you want to describe it as a trap, it would be a trap with a wide spread of trap frequencies. OK. So if we use this picture now, what we have learned from ion traps-- remember, we had an ion trap with a sharp carrier. And then we had sidebands at the trap frequency. But if we have sort of now a lousy trap which each realization, each moment has a different trap frequency when waving all my arms, I would say for the other part of the ensemble, we get a carrier, and we have something else. And for another realization, we have another trap frequency. So if I use a little bit of artistic intuition here, I would expect, based on what we learned from the previous discussion, that in such a buffer gas, I would have a sharp carrier, and then I would have sort of a pedestal, which is the envelope of many trap frequencies. And we know sort of that the envelope of all our sticks-- this was actually given by the Doppler effect. So what we may expect now is that in this situation with buffer gas, we get a sharp line. And then we have this broad pedestal, which you can think [INAUDIBLE] intuitive picture as smeared outside bands. And I may call that the Doppler pedestal. And we would expect-- and I discussed that with the basal function, that there is one limit where the envelope of all those sticks eventually looks like a Doppler-broadened line. Anyway, time is over. But let me just give an outlook. On Friday, what I want to do is I want to calculate the width of this line with you. And remember, all we have to do is we have to calculate the correlation function. Previously, when I derived for you Doppler broadening, the correlation function was the kx becoming kvt. kvt, where x became vt how much the atom moves with the velocity v. By simply replacing the linear motion, V times t, with a diffusive model, we can calculate the line shape. And we will actually find that the central line is not infinitely sharp, but it has a width which is given by the diffusion constant. And if the diffusion constant is very, very small, we approach a very sharp line. And the final comment is, and this is called Dicke narrowing. It is this counterintuitive result that collisions, if they have those properties, are not broadening the line. They actually narrow the line from the [INAUDIBLE] Doppler width to something which is much sharper. And this has been useful for high precision spectroscopy. But I think with the concept which we discussed today of confinement, you realize why collisions can actually reduce the line widths, namely by preventing the atoms from acquiring random phases with respect to the drive field [INAUDIBLE]. Any questions? So see you on Friday in this other building, this other lecture hall that's just been announced on the website.
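[Aside, not from the lecture: the result being previewed here, in the diffusive limit where the mean free path is much shorter than the wavelength, is usually written as below; D is the diffusion constant of the atom in the buffer gas, and the exact numerical factors depend on the model.]

```latex
\big\langle e^{\,ik\,[z(\tau)-z(0)]}\big\rangle \;\approx\; e^{-k^{2}D\,|\tau|}
\quad\Longrightarrow\quad
\text{Lorentzian carrier of half width} \;\sim\; k^{2}D \;\ll\; k\alpha
```

so the central line can be much narrower than the free-space Doppler width k alpha.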
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
14_Atomlight_Interactions_III.txt
PROFESSOR: Good afternoon. Let's get started. So we continue our discussion today about light-atom interaction. And just to sort of remind you where we are, we started last week, even before spring break, to talk about the matrix element which provides a coupling. So now we have a coupling between the two states. And we want to understand what the coupling is doing to the system. What is the dynamical evolution of the system? How does the atomic wave function evolve when we couple two states using optical fields? And well, as usual, we start with the basic phenomena. And that is, we do perturbation theory with the dipole Hamiltonian. And we have done that on Monday. We've done perturbation theory. And I made an important distinction between the monochromatic case and the broadband case. The monochromatic case gave us in perturbation theory this result. And if you stare at it for a while, you see, well, these are just Rabi oscillations. It's not the formula of Rabi oscillations with the generalized Rabi frequency and power broadening, because it's perturbation theory. So these are Rabi oscillations in the perturbative limit. But also on Monday I showed you that for interactions with monochromatic light, we can just rewrite the Hamiltonian, so that it not just looks like-- it is exactly the Hamiltonian for a spin 1/2 in a magnetic field. And we have already discussed the solution. So, therefore, we know for monochromatic radiation, we can just go beyond perturbation theory. We can solve it exactly. OK. So monochromatic radiation we understand. At least as long as we have just the coupling of the atom to a single mode of the electromagnetic field. And now we come back to the broadband case. I'll give a little bit more of a summary and an outline after I finish the broadband case. So what we have right now is Rabi oscillations, which we derived for a single mode. And the broadband case is such that we now assume that we have a spectrum of frequencies. And we assume that to be flat and broad in a moment. And what we have to do now is substitute in this formula the Rabi frequency by the electric field. And using the connection between electric field and energy density, eventually the Rabi frequency squared gets replaced by an integral over the spectral density. It looks mathematically exact. But, of course, you all recognize that I've made a very important assumption here. Namely, that there is no correlation whatsoever between the different frequencies. Because by integrating here, I'm just summing up the e squares, assuming that there's no interference, no coherence, no correlation between the different frequencies. OK, so with those assumptions I can now just formulate the mathematics exactly. I calculate now the probability to be in the excited state by assuming we have these kind of perturbative Rabi oscillations at every frequency component. But we integrate over all frequencies. Any questions? So now-- and this is where I ended the last lecture. And there are actually now-- this is really an interesting case. I know I sometimes spend a lot of time on sort of where the math is simple, but the physics is really interesting.
So the probability to find an atom in the excited state is now this function which replaces, which are the Rabit oscillations, convoluted integrated over the spectral density. And now we have to consider two cases. If this function is flat and is spectrally very broad, we can pull it out of the integral However, this function here, which are your Rabi oscillations-- sine squared of an [? augument ?] divided by the [? augument. ?] I've plotted it here for you. This function is actually very peculiar. It has a height of t square. If you're doing Taylor Expansion for short times t. But the width is t to the minus 1. So as time goes by, it gets narrower, and taller, and taller. And if you take t squared by t to the minus 1, you get t. So the integral over it is just t. So in other words, we have a function which goes as t square. But the integral is t. And if the width is so narrow that the width doesn't really matter, we can say this function has become a delta function. But the integral over the function is t. So it becomes t times the delta function. And now we the two limiting cases. If very short times t-- this Rabi oscillation function is extremely broad. And at infinitesimal times t, it is broader than any spectral bandwidths. So actually what we are doing is when we solve this integral at very short time, this is broader. And we pull this out of the integral. And this gives us a prefect of t squared. So, therefore, the excitation probability is t squared times the integral of the spectral function which is just the totally intensity. However, if you wait a little time, and eventually if the time is such that the time is longer than the inverse bandwidths of the spectral radiation, then the spectral radiation is broader than this function, f. And we pull this out of the integral. And then what we have is we have simply the integral here over the delta function which gives us a factor of t. So we have two functions. And whichever is broader can be pulled out of the integral. And that means in one case, we have a probability which is t squared times the total intensity. And in the other case, we have a behavior which is the time times the spectral density at 0 detuning. OK. So now let's-- just a side remark before I interpret this result, this function you would recover actually the Rabi oscillation. I plotted it in that way emphasizing the amplitude and the width. But if you look for fixed detuning and you vary the time, this function is sort of spreading out and at a fixed detuning, you'll go up and down. And these are the Rabi oscillations we have discussed for monochromatic radiation before. So these two limiting cases are actually very important. At very short times, the behavior is, you know, we have broad spectral radiation. But if the time is shorter than the inverse bandwidth of the radiation, even the broadband radiation is like monochromatic wave. And the short time behavior is t squared times the intensity. That's exactly what we got for monochromatic light. In other words, if you're broadening, you have a broad spectral source, but if your inverse time is broader than the broadening of the spectral source, you're back to the monochromatic case. So a lot of people get confused. I mean, I see that often in part 3 oral exams, that no matter what your spectral bandwidth is-- unless it's infinite. But let's discuss a pathological case. For any broadband light, at short moments, the system evolves as t square. And t square is the hallmark of coherent time evolution. 
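[Aside, not from the lecture: a small numerical illustration of the crossover just described. The Rabi-oscillation kernel is integrated against a flat spectrum of unit total intensity and full bandwidth 1 in arbitrary units, so the excitation probability should grow as t squared for times much shorter than the inverse bandwidth, and linearly in t afterwards.]

```python
import numpy as np

def excitation(t, bandwidth, n=40001):
    """Integral of sin^2(delta*t/2)/(delta/2)^2 over a flat spectrum of unit total intensity."""
    delta = np.linspace(-bandwidth / 2, bandwidth / 2, n)
    kernel = t**2 * np.sinc(delta * t / (2 * np.pi)) ** 2   # equals sin^2(delta t/2)/(delta/2)^2
    return np.sum(kernel) * (delta[1] - delta[0]) / bandwidth

bw = 1.0
for t in (0.05, 0.1, 100.0, 200.0):
    print(f"t = {t:6.2f}   P = {excitation(t, bw):.4f}")
# Doubling t from 0.05 to 0.1 quadruples P (coherent, t^2 regime);
# doubling t from 100 to 200 only doubles P (rate equation, Fermi's golden rule regime).
```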
The amplitude goes with t. The probability goes with t square. And only when the bandwidth of the spectral light dominates when we can replace this by the delta function. Then we are in the regime, which you all know is Fermi's Golden Rule, where we [INAUDIBLE] equation. And it all comes from this formula. It all comes from assuming Rabi oscillations. But then performing the spectral integral over the Rabi oscillation. This takes us from the narrowband case to the broadband case. Any questions about that? Yes? AUDIENCE: So what says the bandwidth here? Because we assume its narrowness arises just because of the short [INAUDIBLE]. PROFESSOR: I will use an LED which has a few nanometer bandwidths. Use sunlight which has a few hundred nanometer bandwidths. Whatever your light source is. At this point, it's a general discussion. And I'm not going beyond the two limiting cases here. Of course, if you have a complicated spectral distribution, well, you're on your own to solve this integral. But at short times, we have this behavior. At long times, we have this. And the two regimes are separated by the time where I have a crossover in one case. This is broader and I can pull it out in the other case. This function is broader and I pull it out of the integral. And just where this happens, this is where we have the transition from t squared to t behavior. From coherent evolution to rate equation. OK. Yes. So let's now look at the situation of broadband light. Because later today I want to discuss with you Einstein's a and b coefficient. A very classical topic of atomic physics. A very famous concept introduced by Einstein. And actually I want to use the perturbation theory for broadband light which we have now formulated to derive for you the b coefficient of Einstein's a and b coefficient theory. OK. So for large time, we can now talk about rate. Which is the probability which increases linearly with time per unit time. And this was the matrix element squared. 2 epsilon 0 h-bar squared, 2 pi. The 2 cancels. And then we have from the delta function, the spectral density at 0 detuning. Which is the spectral density at the resonance frequency. So we have a rate equation now. That the rate equation is the b coefficient. Einstein's famous b coefficient times the spectral density. And the b coefficient is now the proportionality constant between in the equation above. Which is pi d squared. But now in all the formula for the b coefficient, there's a factor of 3. Because the assumption is made that we have isotropy of space. The atom are randomly oriented. And, therefore, dx squared for given polarization, the dipole moment projected on a polarization of the light which is dx squared is just 1/3 of the absolute value of the dipole moment squared. So in other words, just to remind you what I have actually discussed is nothing else than Fermi's Golden Rule. And I could've reminded you of Fermi's Golden Rule where the rate is given. I just use a standard notation of textbooks. You take the matrix element squared, you multiply by 2 pi, and then you have a delta function. And the delta function implies a delta function is always a reminder that it needs integration. So whenever you have a delta function in Fermi's Golden Rule, you have to integrate. And there are two possibilities. You have to integrate over the spectrum of external fields. That's what we just did. The other possibility is, which doesn't apply to what we just discussed, that you have to integrate over a continuum of final states. 
This will be important when we use a Fermi's Golden Rule expression to talk about spontaneous emission where we have a continuum of final states. So anyway, I could have just said, let's start with Fermi's Golden Rule and let's jump to the final result. But I really wanted to emphasize here the sort of intimate connection between Rabi oscillation, the t square dependence, and how this turns into a rate equation. OK. Let's just summarize what we have done in a table. We have seen two different regime. In one case with the Rabi resonance, we are discussing a single final state of the atom. A single mode of the electromagnetic field. All energy levels, all states are discrete. We are talking about unitary reversible time evolution. When we had rate equations, we are talking about many final states. We integrate over them. Or and/or many modes of the external field. We are naturally dealing not with a discrete number, but with a continuum of states. The time evolution has become irreversible. And is therefore no longer unitary evolution, but it's a dissipative evolution. And all this came about not because we have spontaneous emission. I will tell you throughout this course that spontaneous emission is not as spontaneous as everybody assumes. Spontaneous emission is actually unitary time evolution. Unless you discard information. But a lot of people think rate equation irreversibility comes from something which is genuinely spontaneous and irreproducable. I don't know anything in physics which is spontaneous and irreproducible. But we come to that later. And this is an example where we obtain rate equation by simply driving a system. And the irreversibility comes by performing the integral over the spectral density. So let me just write that down. Due to integration. Since we integrate over an infinite number of modes or states. Any questions? OK, great. I wanted to make sure that this is very clear. OK, at this point, let me just summarize where we are in our discussion of atom light interaction. We've actually made a lot of progress. We have discussed matrix elements. We have discussed the coupling of atoms to an external field at the level of the Schroedinger equation. And we have done perturbation theory. And in perturbation theory, we found Rabi oscillations and we found rate equations. That's where we are right now. So the feature now which is missing is, of course, damping spontaneous emission irreversability-- another form of irreversibility. Right now our Rabi oscillations are undamped. Whether we obtain them in perturbation theory or whether we use the spin formalism to get them in the resonant in the strong coupling case. And here, for the rate equation, the way how we have solved it, the probability to be in the excited state just increases forever. The system will never reach equilibrium. But that means in both cases, we have a missing element. And this is spontaneous emission. So for the next hour or two, we'll talk about aspects of spontaneous emission. Spontaneous emission will actually eventually lead to damping of Rabi oscillation. And to a saturation of the excitation. OK, so we're now discussing spontaneous emission. And we will discuss it in actually three levels. One is I will discuss Einstein's a and b coefficients. I sometimes hesitate. Should I really discuss Einstein's a and b coefficients? It's sort of old fashioned. And I have already in perturbation theory given you a microscopic derivation of Einstein's b coefficient. 
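[Aside, not from the lecture as written on the board, but collecting the factors just listed; the spectral energy density is taken per unit angular frequency, and conventions differ between textbooks.]

```latex
R_{g\to e} \;=\; B\,\rho(\omega_{0}),
\qquad
B \;=\; \frac{\pi\,d^{2}}{3\,\varepsilon_{0}\hbar^{2}},
\qquad
d^{2} \equiv |\langle e|\,\hat{\mathbf{d}}\,|g\rangle|^{2}
```

with the factor of 1/3 coming from the orientation average, d x squared equals d squared over 3.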
But everybody who is an atomic physicist knows about Einstein's a and b coefficient. It was really a stroke of genius to do it. And it becomes sort of our language. So what I'm doing here is I'm not beating it to death, but I give you sort of a short summary. I also make a few comments on something which is actually amazing: how Einstein actually got results from the a and b coefficient which you can only get otherwise if you quantize the electromagnetic field. So it's also sort of historically interesting that Einstein actually developed the theory of the a and b coefficient before the Schroedinger equation. Before quantum mechanics was developed. And often you call the Schroedinger equation the first quantization and the field quantization, the second quantization. So in some sense, Einstein actually preempted or had already the results of second quantization before first quantization was developed. Anyway, it's a landmark paper, how Einstein did it. And that's why I want to discuss it. But it's partially also in order to give you the historical context. But then, of course, we want to use the modern formalism of quantization of the electromagnetic field. And we have already obtained just now the result for Einstein's b coefficient by just looking at the induced rate, the absorption rate or the stimulated rate. But then eventually, by having a quantization of the electromagnetic field, we can also do now a microscopic, fully quantum, first-principles calculation of the a coefficient. So then we have already the b coefficient, we get the a coefficient out of a microscopic calculation. So we don't really need Einstein's treatment of a and b at this point. But it's nice to see the connections. So anyway. So this is the agenda. Einstein's a and b coefficient to pull out spontaneous emission without even putting it in. Then we'll talk about field quantization which automatically leads us to a treatment of spontaneous emission. Any questions? So how was Einstein able to show that there is spontaneous emission without sort of knowing the quantum character of fields? Well, the point was he knew and understood that there would be thermal equilibrium. He said, I know what thermal equilibrium is. Thermal equilibrium is the Boltzmann factor. A Boltzmann probability for an atom to be in the excited state. The probability to be in the excited state is just the Boltzmann factor and depends on temperature in the usual way. And he also knew that the spectrum of light would follow a Planck distribution. And if you put those things together, you go beyond what we have derived so far. Because you are in thermal equilibrium. What we derived so far does not have thermal equilibrium. And thermal equilibrium only comes through the damping of spontaneous emission. So, therefore, by just using the Boltzmann distribution and Planck's law, Einstein got spontaneous emission. And this is what I just want to show you. For most of you, it's a reminder. OK. Einstein's a and b coefficients. I will post one of Einstein's papers on the website. He was also the first to actually discuss mechanical forces of light. He realized that if you have a gas at a temperature which is different from the temperature of the walls, the gas has to equilibrate. And the gas can only equilibrate, lose excess velocity, by transferring its momentum to the photons. So some equations of laser cooling, the fact that light can exchange momentum with a particle.
And this is eventually what leads to equilibrium, was already in papers at the beginning of the 20th century. And it's just amazing if you read those papers. How modern the language is and how clear the language is. But here, I'm not talking about the mechanical effects. But the mechanical effects of light which many people in this class use for a living, this is actually part of this equation. Because the equilibrium between-- again, I discuss here Einstein's a and b coefficient-- the equilibrium between the electronic structure, the ground and excited state with a photon field. But Einstein also considered the equilibrium between the motional degree of the atom. And equilibrium between the motional degree of the atom and the radiation field requires the spontaneous force. The spontaneous radiation force. I'm not discussing it here. But I'm discussing here is now an equilibrium between ground state and excited state. So the probability to find an atom in the excited state is simply described by the Boltzmann factor. Now it's traditional in the discussion of Einstein's a and b coefficient to allow for degeneracy factors at ground and excited state. I have to say I usually hate that I try not to talk about levels. I just talk about quantum states. Non-degenerate individual quantum states. So in that sense, I try to characterize population in a quantum state, not in a level. But it is standard to follow Einstein's concept where we have degeneracies. I'm not emphasizing them here, but I will just write them down where they belong. OK, so this takes care. We know what is a fraction of atoms in the excited state. So this is the equilibrium. The next thing we need is the light. And Einstein assumed that it's a spectral density in a black-body cavity. So we need the energy density per frequency interval. And this is nothing else than the occupation number of the mode times the energy of the photon. Times the density of states. The photon number per mode is just given by the Bose-Einstein factor. Bose-Einstein statistics factor. The mode density is, as you know, in three dimension. Omega squared, pi square over c cube. So, therefore, the spectral density of black-body radiation has-- and we need that, and omega cube dependence. And then it has this Bose-Einstein denominator in the well-known form. So this is now Planck's black-body spectrum in the units where we need it. So all we need is now to find the famous Einstein a and b coefficient. We have to write down a rate equation for the atoms. So the fact is we know already what equilibrium is. Excited state versus ground state population is the Boltzmann factor. But now we write down a rate equation which involves a black-body field. And then we compare the solution of the rate equation to the solution we already know. And from that, we get Einstein's a and b coefficient. OK, so the change in the population of the excited state has three different terms. One is the energy density of the black-body radiation can cause stimulated emission. So, therefore, it's proportional to the number of atoms in the excited state. The energy density of the black-body radiation can cause absorption. This is proportional to the number of atoms in the ground state. And then this equation as it stands would lead to contradiction when I compare the solution of this equation to the Boltzmann factor we already know. And the only way to fix it is to add an extra term. Which is spontaneous emission. 
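In the same spirit, the Planck spectral density and the rate equation just described can be written as (my notation; energy density per unit angular frequency):

\[
\rho(\omega) \;=\; \underbrace{\frac{\omega^{2}}{\pi^{2}c^{3}}}_{\text{mode density}}\;\underbrace{\hbar\omega}_{\text{photon energy}}\;\underbrace{\frac{1}{e^{\hbar\omega/k_{B}T}-1}}_{\bar n},
\qquad
\dot N_{e} \;=\; -A\,N_{e}\;-\;B_{eg}\,\rho(\omega_{0})\,N_{e}\;+\;B_{ge}\,\rho(\omega_{0})\,N_{g}.
\]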
If spontaneous emission were not necessary, this a coefficient could in the end turn out to be 0. Or it can be undetermined. But as we see, it is necessary for consistency. So this is pretty much the famous rate equation. And we are interested in the equilibrium solution. In equilibrium, all derivatives vanish. And then by setting the derivatives to 0, I have one equation. And I will rewrite the equation by putting the spectral density of the light on one side. And everything else on the other side. And what we have here is the a coefficient. The ground state population, the excited state population. The b coefficient. So this is the spectral density of-- it's just an expression for the spectral density. We want to put in now that the excited state fraction is given by a Boltzmann factor. So, therefore, Ne over Ng becomes the Boltzmann factor. And, yes, there are these degeneracy factors. So I've pretty much divided the denominator and the numerator by the population in the excited state. And here I get Beg OK, so this is the result for the spectral density. But we know already that the spectral density has to be of the Planck form. So now we can simply compare what we know to what we obtained from the rate equation and make sure that it matches. So it's good we have the exponential factor. And by bringing this expression to the form of the other expression, we actually have to fulfill two conditions. One is in order to make sure that kind of the total expression is OK, it gives us a ratio. The Planck body spectrum is normalized. There is no unknown prefactor. So this determines the ratio of a and b. And also since we have this functional form of the Bose-Einstein statistics which has this exponential factor minus 1, it gives us also a relation between the 2B coefficient. That one is the b coefficient for stimulated emission. And the other one is the b coefficient for force absorption. OK. So with that, we have the relation between the a coefficient and the b coefficient. And we find that the B coefficient for absorption and emission are the same. Well, we know it's the same coupling matrix. I mean, the Hamiltonian which connects ground to excited state, excited state to ground state. But if you really want to deal with degenerate states and not formulated for states, you have degeneracy factors. OK So I could stop here. This is sort of the textbook result. But I want to rewrite the result that we recognize the quantization of the electromagnetic field. So instead of just looking at the power in Planck spectrum, spectral density, and such. I want to bring in the photon number. I've already given you the Bose-Einstein distribution for the photon number in the mode. So I take now equation a and multiply it with the average photon number in a mode of omega. This gives me on the left hand side-- I'm multiplying this with a photon number. So on the left hand side, I have a times the photon number. On the right hand side, when I put in the photon number, the photon number with this expression just give me the Planck distribution, the spectral energy. So yes. This gives me the spectral energy density times the b coefficient. Yes. And this is nothing else than stimulated emission. So we realize that stimulated emission is nothing else than n times the photon number. The photon number n times spontaneous emission. Similarly, we know that the rate for absorption becomes now, well, the same unless we have degeneracy factors. But just for the fundamental discussion, let's avoid the [? p ?] 
counting how many degenerate levels a level have. Let's just assume we have a situation that we just count every state individually. Then I can summarize this result saw in the following. That the total rate for emission was proportional to n for stimulated emission. And then we have the extra 1 for spontaneous emission. Whereas the rate for absorption was n times the spontaneous emission. So we find that this important formula that emission has an n plus 1 factor. Absorption has an n factor. And it is, of course, this extra plus 1 which was absolutely crucial to establish thermal equilibrium. If a had been 0, no thermal equilibrium would have been reached. So in other words, what is already in Einstein's treatment of the a and b coefficient is that if you understand absorption, which you can understand with the Schroedinger equation, and you understand and you write it in the fundamental way in photon numbers, then spontaneous emission is just the rate of absorption divided by n. Spontaneous emission is like induced emission in its rate. But by just one single photon. So as I pointed out, this is a result which is usually obtained with second quantization and it is already included in Einstein's a and b coefficient. So we could stop here. We have already a major result which is usually obtained in field quantization. But there is one deficiency and we want clearly fix it and move on to the microscopic derivation. And this is the following. Right now, we really assume black-body radiation. And this ratio n plus 1 over n was only derived for average photon numbers in a spectrally broad field. And what is left for microscopic treatment which I want to present now is even if you have just a single mode, the atom can only interact with a single mode. We find that stimulated emission and absorption is proportional to n, the number of photons already present. And then there is plus 1 for spontaneous emission. So in other words, we do it now sort of microscopically again. And what we get out of it is that everything we learned from Einstein's a and b coefficient is not just valid in thermal equilibrium. It's not just valid for average numbers. It's really valid for single mode physics. OK, so the agenda is what is next. Is valid. 4n. So this expression is valid not only for an average over many modes, but for each single mode. Questions about Einstein's a and b coefficient? OK. So we spend now the rest of today and parts of next Monday in a microscopic derivation of spontaneous emission using field quantization. But I just want to make you aware that we know already what it is. We have a semi-classical derivation of the b coefficient. And Einstein's treatment gives us the ratio of a and b. So we know already at this point what the rate of spontaneous emission is. But it is nice. I think also important for our education to obtain it in a microscopic way where we really show how we have to-- sum overall modes and such to obtain the expression. Also I want to ask you questions. I want to ask you clicker questions afterwards. And one clicker question for you is what happens to spontaneous emission in one and two dimension? Certain things will change. And it's much clearer what will change if you have a clear understanding how we sum up all of the modes. How all the possible modes contribute to spontaneous emission. And, of course, in two dimension and one dimension, you have a different density of modes. So with that motivation, we need a quantized electromagnetic field. 
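Before moving on to field quantization, the Einstein-coefficient relations just obtained can be collected as follows (my notation, with the degeneracy factors $g_g$, $g_e$ kept where they were mentioned):

\[
0=\dot N_{e}
\;\Rightarrow\;
\rho(\omega_{0})=\frac{A/B_{eg}}{\dfrac{B_{ge}}{B_{eg}}\dfrac{N_{g}}{N_{e}}-1},
\qquad
\frac{N_{e}}{N_{g}}=\frac{g_{e}}{g_{g}}\,e^{-\hbar\omega_{0}/k_{B}T},
\]
and matching this to the Planck form requires
\[
g_{g}B_{ge}=g_{e}B_{eg},
\qquad
\frac{A}{B_{eg}}=\frac{\hbar\omega_{0}^{3}}{\pi^{2}c^{3}},
\]
which, rewritten with the photon number $\bar n$, is the statement that emission goes as $\bar n+1$ and absorption as $\bar n$.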
Where we quantize the field for each mode. And then we go back, we do the summation of all modes. And we've really understood in the most fundamental and microscopic way how photons and light interact. OK, so our next chapter is the quantization of the radiation field. We do-- yes, [INAUDIBLE]? AUDIENCE: I just have a question. So when we compare the rate equation and the distribution of the photons, so there is a parameter, t, in both of them. So we just assume two t's are the same because of their reaching thermal equilibrium. PROFESSOR: Oh, yeah. Absolutely. I mean this is, of course, what Einstein assumed that the thermal equilibrium for atoms with the Boltzmann factor. And the thermal equilibrium for photons described at the Planck distribution have to be reached at the same temperature. It was a thermodynamical argument assuming which is, of course, one of the tenets of statistical physics of thermodynamics. If you have two systems and they interact with each other, they equilibrate at the same temperature. Yes, this is very important. This was a very important assumption. Of course, as we know, when we have ultra cold atoms in a room temperature vacuum chamber, the atoms do not equilibrate. But if you would trap them for an infinite amount of time, they would equilibrate. It's just that we lose the atoms from our trap. They're knocked out by [INAUDIBLE] gas collisions. A lot of other things happen. But if you could isolate just ultra cold atoms in a trap, they would stay in this trap forever. There would be no other effect shortening our observation time. Eventually, the atoms would just boil out of your trap because black-body radiation. Momentum transfer from black-body photons heats the atoms up to room temperature. And this is, of course, one of the things which really amaze people when laser cooling came along. You know, everything at low temperature was cryogenic. If you want to keep a sample cold, you had to put liquid nitrogen shields, helium shields. You had to put multiple shields around-- if you had an optical [INAUDIBLE], you had a window of liquid nitrogen temperature. One window at helium temperature. Just to make sure that the black-body radiation is absorbed and blocked. Because it would've been absolutely detrimental if you had a sample at very low temperature and it would've been exposed to black-body radiation. So it's really a unique feature of the atoms that they are, and this is what you will calculate in this week's problem set, that the atoms are almost completely transparent to the black-body radiation. They only react if the hyperfine frequency or they react far, far, far, far, far off in the tail of the black-body radiation with an electronic transition. But nevertheless, as Einstein has taught us, and as we know from general principles, this will not mean that the atoms stay cold and are decoupled. It just means that it takes maybe the age of the universe. I've never calculated the number. It would really take forever until the atoms in the atom trap reach the ambient temperature. So Einstein's argument was an idealized argument which in practice would never happen. But if you exclude all other processes, you have a consistent system by saying I only have atoms with their kinetic energy. I have black-body radiation. And everything has to equilibrate. And as I said before, the argument for Einstein's a and b coefficient simply assumes that the ground and excited state population equilibrium. 
But you can carry the argument even further and say even the Maxwell-Boltzmann velocity distribution of the atoms has to equilibrate at the ambient temperature. Beautiful argument. And what you find from this argument is it's really amazing. You find the photon recoil is h-bar k. Einstein pulled it out simply by making this assumption. I will post the paper on that. OK, field quantization. We discuss the quantization of the electromagnetic field really from first principles. From vector potential, radiation field, Coulomb gauge, transverse vector potential, in 8422. So we dedicate one or two classes to just discuss all the steps to have full quantization of the electromagnetic field with all the bells and whistles. So sometimes when I teach this course, I say, well, you've heard about field quantization, I can refer to that. Or I can refer you to 8422. But in the end I thought, why don't I just give you a 10 minute derivation. Just sort of focusing on the essentials because this makes this course more self-contained and more complete. So I give you now a ten minute quantization of the electromagnetic field. Pretty much going straight to showing you that the electromagnetic field is a harmonic oscillator. And now let's use the quantum description of the harmonic oscillator. And then we have a quantum description of the electromagnetic field. So this is not rigorous, but it is logically complete. So we focus in the discussion of the quantization of the electromagnetic field, we focus just on a single mode of the electromagnetic field. Each mode will be a harmonic oscillator. And then we have many harmonic oscillators comprising the electromagnetic field. So even for a single mode, we assume that we have plane waves with a polarization, with an amplitude. The electric field is the derivative of the vector potential. And the shortest way to show you an analogy with the harmonic oscillator is to remind you that the total energy-- which is actually, if you're wondering about a factor of 2, the electric and the magnetic part-- the total energy is quadratic in the amplitude of the vector potential. By the way, there is a factor of 1/2 because if you have a sinusoidal variation, you take the time average, cosine squared, which is 1/2. Well, if the total energy is quadratic in the amplitude, this immediately allows us to draw analogies to a harmonic oscillator. And we can use the vector potential of the single mode of the electromagnetic field to define two quantities, q and p. Let me write it in that way. Omega q plus ip is related to a in the following way. And yes, I was just ranting about this. V is the volume. We assume everything happens in a finite volume of space. You would say I have two new quantities, q and p. So I need two equations. And the two equations involve a and a complex conjugate. So now we had an expression for the total energy in terms of the amplitude of the vector potential. So now I can rewrite it: the amplitude squared of the vector potential is a times a star. And with that, I get the total energy to be proportional to q squared plus p squared. And that should remind you, and everything was set up to remind you, that this looks like a harmonic oscillator where q is the position variable. And p is the momentum variable. So now, I mean, all this is classical. All this is just clever definitions. But now we have to do a leap to quantum physics. We cannot logically derive it. We have to make a leap.
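As a written-out sketch of this classical step (my normalization; textbooks differ by factors of 2): for a single mode

\[
\mathbf A(\mathbf r,t)=\hat\epsilon\left(A\,e^{i(\mathbf k\cdot\mathbf r-\omega t)}+\mathrm{c.c.}\right),
\qquad
U=2\,\epsilon_{0}V\omega^{2}|A|^{2},
\]
and defining $q$ and $p$ through $A=\dfrac{\omega q+ip}{2\omega\sqrt{\epsilon_{0}V}}$ gives
\[
U=\tfrac12\left(p^{2}+\omega^{2}q^{2}\right),
\]
formally a harmonic oscillator with position $q$ and momentum $p$.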
And the leap is that we postulate that this should now be described as a quantum harmonic oscillator. And this transition is done by simply postulating that the two quantities we have defined fulfill the canonical commutator for position and momentum. So we've started with the vector potential, expressed the energy as a vector potential, and now we say we recognize through those definitions that this is an harmonic oscillator with variables q and p which are defined in terms of the vector potential. So then you know if you have the quantized harmonic oscillator, you immediately introduce creation and annihilation operators. Which are linear superpositions of q and p in the following form. And a dagger has a minus sign here. And all the prefactors were cleverly set up in such a way that the commutator of a and a dagger is 1. And now we can do all the substitutions. We can express p and q by a and a dagger. But p and q were related to the vector potential, a. And the vector potential a square defined the energy. So now we have an expression for the energy which is no longer involving a or p and q. It involves a and a dagger. And surprise, surprise, we find that our total energy because we have operators now has become a Hamiltonian, Has this well-known result with the photon number operator a dagger, a plus 1/2. So this is sort of the quickest way which takes us in a few minutes to the quantized electromagnetic field. Of course, all I need is to come back to spontaneous emission stimulated emission are the matrix elements of this operator's a and a dagger. And this is where, of course, stimulated and spontaneous emission-- all that comes in. The non-vanishing matrix elements in this description of the electromagnetic field are the ones where a annihilates a photon and the matrix element is square root n. Or where a creates a photon, adds a photon to n photons already present. And then the matrix element is n plus 1. OK, so we went from a to q. And p. And we went to a and a dagger. But a is also related to the electric field by taking the time derivative of the vector potential. So now, of course, we can go from our expressions of a and a dagger all the way back. Just substitute, substitute, substitute. And find an expression for the electric field in terms of a and a dagger. The result is that we have a and here we have a dagger. We have a polarization vector. We have the plain wave vector. And the complex conjugate. And the complex conjugate of a is a dagger so the electric field is a superposition of a and a dagger. The electric field becomes an operator which is the sum of creation-annihilation operator. So with that, we can go back to our Hamiltonian. Our Hamiltonian for the interaction between light and atoms in the simplest possible case was the dipole Hamiltonian. Which involves a dipole matrix element. The charge of the electron is negative. That's why the minus sign has disappeared. And now all we do is from our treatment before in the Schroedinger equation where the electric field was an external field. Now the electric field becomes the operator acting on the quantum state of the electromagnetic field. So by the way, this prefactor here is because the rest of it is just dimensionless. This prefactor has, so we have the matrix element here, this prefactor is an electric field. And it's something you should always know. This electric field is actually the electric field of a single photon. This is the correct normalization. 
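In the same (standard) conventions, and up to phase conventions for the field operator, the quantities just described are

\[
[q,p]=i\hbar,\qquad
a=\frac{\omega q+ip}{\sqrt{2\hbar\omega}},\qquad
[a,a^{\dagger}]=1,\qquad
H=\hbar\omega\left(a^{\dagger}a+\tfrac12\right),
\]
\[
a\,|n\rangle=\sqrt{n}\,|n-1\rangle,\qquad
a^{\dagger}|n\rangle=\sqrt{n+1}\,|n+1\rangle,
\]
\[
\hat{\mathbf E}(\mathbf r)=\mathcal E_{1}\,\hat\epsilon\left(a\,e^{i\mathbf k\cdot\mathbf r}+a^{\dagger}e^{-i\mathbf k\cdot\mathbf r}\right),
\qquad
\mathcal E_{1}=\sqrt{\frac{\hbar\omega}{2\epsilon_{0}V}},
\]
where $\mathcal E_{1}$ is the electric field of a single photon mentioned above.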
If you want to factor out the volume, the frequency, and all that, you combine these factors in such a way that it's electric field of a single photon. Then we have the dipole moment. And then we have an expression with creation and annihilation operator over here. I assume now that the atom sits at r equals 0. So why should I carry forward an e to the ikr term. We conveniently place the atoms at r equals 0. But I have to say a word or two about the e to the i omega t factor. I have been deliberately cavalier about my formulation in quantum physics, of quantum mechanics, whether I use the Schroedinger or the Heisenberg picture. And you know in the Schroedinger picture the wave function is time dependent, not the operator. In the Heisenberg picture, it's the other way around. And I have to tell you every time I do a calculation and look at it, I'm getting confused about the two pictures. So anyway, trust me that in this case when I want to discuss the Schroedinger picture, the time dependent factor should not be present. But you really have to look at the derivation and carefully realize the two are connected with a unitary transformation. You really have to figure out in which [INAUDIBLE] presentation you are. But I want to not focus on the formality here. But I'm not carrying forward this factor because I want to discuss the Schroedinger picture. OK. Yes. So now we can look at the matrix elements of our interaction Hamiltonian. And just to be clear, we have written down this Hamiltonian for just a single mode of the radiation field. Depending on what we are interested in, we may have to sum over many, many modes. So we are looking at transitions from an initial state which may be an excited state. To a final state which may be a ground state. And since we have quantized the magnetic fields, we also have to specify the state of the quantum field. And we assume that the uncoupled Hamiltonian, of course, has simply number states as eigenstates [INAUDIBLE] photons and prime photons. So the only non-vanishing matrix elements are the following. e is a charge. e hat is the polarization. Epsilon 1 is the electric field of a single photon. And, of course, we only have a coupling by the fully quantized Hamiltonian when we have a dipole matrix element connecting state a and b. I mean these are all sort of things we have already discussed in another context. But now the a's and a daggers which only act on the photon field, give rise value to two couplings. One is absorption and one is emission. Absorption takes place when we look at the matrix element when the final state has one more photon. And emission takes the other way around. When the final state has one more, which way do we go? Let me just write it down. And then read it off. I think I've inverted, but anyway, initial and final state can differ by plus one photon or minus one photon. In one case it's absorption, the other case it's emission. And the matrix element is n or n plus 1. So one is absorption. And one is emission. So finally if we ask, what are the rates of absorption and emission when we assume we have a situation where-- and we have now discussed the matrix element and this matrix element could become the basis of Fermi's Golden Rule. We just have to specify time dependent perturbation theory. But in any case, whatever we do when we talk about the rate, it will involve the matrix elements squared. So now we can ask what happens when we couple ground and excited states. And let's assume we have an excited state. 
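A small numerical sketch (my own, not from the lecture) that just checks these matrix elements, square root of n for annihilation and square root of n plus 1 for creation, which is what makes absorption scale as n and emission as n plus 1:

```python
import numpy as np

# Minimal sketch (not from the lecture): truncated Fock-space matrices for a
# single field mode.  <n-1|a|n> = sqrt(n) and <n+1|a†|n> = sqrt(n+1), so the
# squared matrix elements give absorption ~ n and emission ~ n + 1.

N = 8  # truncated photon-number basis |0>, |1>, ..., |N-1>

a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
adag = a.conj().T                            # creation operator

for n in range(N - 1):
    ket = np.zeros(N)
    ket[n] = 1.0                             # Fock state |n>
    absorb = np.linalg.norm(a @ ket) ** 2    # <n| a†a |n> = n
    emit = np.linalg.norm(adag @ ket) ** 2   # <n| a a† |n> = n + 1
    print(f"n = {n}:  absorption ~ {absorb:.0f},  emission ~ {emit:.0f}")
```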
And we sum over all possible photon occupation numbers of the ground state. Well, when we go from the excited state to the ground state, there will be only one term contributing to the sum. Where we have one photon more because it has been emitted. So, therefore, because of the square root n and n plus 1 dependence of the matrix element, we find that for the processes where photon is emitted, where the atomic system gives away a photon, the sum of all the possible rates becomes simply n plus 1. And in the case of absorption it becomes n. So in other words, we have now done the field quantization what Einstein pulled out of a thermodynamic equilibrium argument. Namely that if you have a system that the rate of emission versus the rate of absorption is n plus 1 over n. But we did not assume any spectral distribution. We know this n plus 1 over n applies to every single mode of t electromagnetic field. Questions about that? I also want to tell you, just as a side remark, a lot of people think that when emission is n plus 1, the plus 1 is different from n. That this plus 1 is sort of a spontaneously emitted photon which has maybe some random phase. And the n which is stimulated photons, they go in the same mode as they joined sort of the identical to the photons already present. I don't see any of that in that treatment. So a spontaneously emitted photon is identical to the photon which would be emitted in a stimulated way. You just have n plus 1. This is the matrix element for coupling to this mode. At some point, spontaneous emission can happen in many modes. And if it goes to many modes, then there is some integral or some summation involved. And this can cause a certain randomness. But at the level of a single mode, I do not see any difference between the one photon and the n photons at this level of discussion. Just keep that in mind. And actually, we'll discuss micromasers. You can have put an excited atom in the cavity. And you have a fully reversible exchange. You spontaneously emit, you absorb. You spontaneously emit, you absorb. You have Rabi oscillations which involve a single photon. And they involve spontaneous emission. Fully reversible. Completely [INAUDIBLE] evolution. So we have 10 minutes left. Yes, I think this is just enough to derive for you. Now to derive for you using the fully quantized picture. To derive flow from first principles microscopically an expression for Einstein's a coefficient. So in other words, what I'm doing now is I really directly calculate for you the rate of spontaneous emission. And I'm not getting it through the back door by treating absorption and then saying, well, there's n and n plus 1. Or borrowing some argument from Einstein. It's such an important quantity, we should just hit the system with a Hamiltonian and out comes a spontaneous emission rate. And this is what we're doing. So the starting point is what we have discussed at the beginning of the class. We want to discuss Fermi's Golden Rule. We want to use the rate. And to remind you the rate for process is the matrix element squared by h plus square. And then we have to multiply with the density of states. So this is the density of states. Pair polarization. Actually, I made a few corrections to my notes because I realize I have be very, very careful in telling you what the states are. Because this is what this exercise is about. And we are writing it down for one mode by mode. So the density of state is now pair polarization. We take care of polarizations later. 
Per unit frequency interval. Yes. So this rate, but now I have to add one caveat. I was just thinking how I should express it. This rate, if it's all of spontaneous emission, is the Einstein a coefficient. But there is one caveat. And this is the emission of an atom which has a dipole moment is not isotropic. So I have to be a little bit more careful with the solid angle. I cannot just calculate a rate and assume everything is isotropic. If I would do that, I would save a few minutes. But I would have really swept something under the rug. So what I'm calculating first is the rate into a given solid angle. And then I do an integration over the solid angle. And eventually I will integrate over the dipole [INAUDIBLE]. So, therefore, the density of the photon states is sort of photons with their k vectors going into all of space, but I wanted to have the density of states per unit solid angle. And this quantity is omega squared over 8 pi cubed c cubed, times V. And, of course, if you multiply this by 4 pi, you get your normal density of states. Because the density of states is isotropic. But the rate which we calculate will not be isotropic because of the dipole matrix element. And the dipole pattern. So, therefore, we start with a differential formulation. Spontaneous emission per solid angle. And then when we do, when we integrate over the solid angle, we have to take care of a sine squared factor because of the dipole pattern. Good. Now Fermi's Golden Rule takes us from an excited state to the ground state. And since we use the fully quantized treatment, our states are product states of atomic states and photon states. And so we assume we start with an atom in the excited state. And all modes, mode 1, mode 2, mode 3, are empty. And in the final state, well, one photon is emitted and it can appear in any of the modes. And we have to do an integral over all possibilities. Good. So we did all the work with quantizing the electromagnetic field. Because we want to calculate those matrix elements. Let me just carry over the prefactors, the electric field of a single photon. Here we have the scalar product between the polarization and the atomic dipole matrix element. And now since we are talking about an emission problem, we have from the matrix elements squared, as we just discussed, an n plus 1 factor. But for the population, we start with 0 photons. So, therefore, it's just one. So all the work we did on quantization of the electromagnetic field shows that even without any photon present, we have a coupling. You can say it's a coupling caused by the vacuum, which is like the coupling we would have gotten if we had exactly one photon per mode. So let me just write that out. Yes. I think we can finish that. OK, we have now taken care of the matrix element. So we insert the matrix element now in our Fermi's Golden Rule expression for the a coefficient per solid angle. Let me just keep track of all the factors. [INAUDIBLE] omega in the matrix element. This comes from the electric field of a single photon. We have a matrix element. Dipole matrix element times polarization. And the density of states gives us omega squared. So you see already we'll get a spontaneous emission omega cubed expression. One omega comes because the electric field of a single photon, the electric field squared of a single photon, is proportional to omega. And an omega squared factor comes from the density of states. It's really important to keep that apart. The omega cubed dependence has two different sources. It's always nice to see that we assumed an [INAUDIBLE] volume and it cancels out. OK.
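Collecting the spoken factors (my notation; $\theta$ is the angle between the dipole matrix element and the emission direction, and only one of the two polarizations contributes), the differential rate is roughly

\[
\frac{d\Gamma}{d\Omega}
=\frac{2\pi}{\hbar^{2}}\,
\underbrace{\frac{\hbar\omega}{2\epsilon_{0}V}}_{\mathcal E_{1}^{2}}\,
|d_{eg}|^{2}\sin^{2}\theta\;
\underbrace{\frac{\omega^{2}V}{8\pi^{3}c^{3}}}_{\rho_{\Omega}(\omega)}
=\frac{\omega^{3}|d_{eg}|^{2}}{8\pi^{2}\epsilon_{0}\hbar c^{3}}\,\sin^{2}\theta,
\]
with the quantization volume $V$ cancelling, one power of $\omega$ coming from the single-photon field and two from the density of states.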
Let me just write it down and then we do the final step. So if everything were isotropic, I could just multiply with 4 pi. And the last factor would be dropped. But if you want to go from the spontaneous emission per solid angle to the total spontaneous emission, we have to average it. And what has angular factors is actually the projection between the atomic dipole moment and the polarization. So this is the relevant term. And now we have to distinguish. There are two polarizations. One polarization, the polarization when we have a dipole moment. And you have light which is polarized in such a way that the light goes here. This is a dipole moment, the light goes here. And now the light which propagates here can have a polarization like this. Which has a projection of sine theta, with the dipole moment. And if the light goes there and the polarization is like this, it's orthogonal to the dipole moment. So the scalar product is 0. So for one polarization, we have a sine theta factor. For the second polarization, the scalar product for the dipole moment is 0. So, therefore, and that's the last conclusion I want to draw today. Is this integration over the solid angle boils down to that we can pull the matrix element out of the integral. And what is left is the projection factor, sine square theta. We have to integrate over the whole solid angle. And this gives us 2/3. So, therefore, our final result is that the microscopic expression for the a coefficient has its factor of 3 in the denominator. And this factor of 3 only comes because I correct the average over the dipole pattern. Well, then we have 4 pi epsilon 0. I mentioned the important dependence on the frequency cubed. OK, so this is our final result for today. And I will discuss next week what are its units? How big is it? What is the quality factor of the atomic oscillator? But we can start next week with this result. All right.
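Since the angular integral is $\int\sin^{2}\theta\,d\Omega=8\pi/3$, the final result quoted above is $A=\omega^{3}|d_{eg}|^{2}/(3\pi\epsilon_{0}\hbar c^{3})$. A quick numerical sketch of its size (my own, with purely illustrative numbers for the wavelength and dipole moment, not values quoted in the lecture):

```python
import numpy as np
from scipy.constants import hbar, c, e, epsilon_0, physical_constants

# Numerical sketch (not from the lecture): evaluate the spontaneous-emission
# rate A = omega^3 |d|^2 / (3 pi eps0 hbar c^3) for an assumed, purely
# illustrative transition.  Real atoms have different numbers.

a0 = physical_constants["Bohr radius"][0]

wavelength = 600e-9      # assumed transition wavelength [m]
d = 1.0 * e * a0         # assumed dipole matrix element [C m], about 1 atomic unit

omega = 2 * np.pi * c / wavelength
A = omega**3 * d**2 / (3 * np.pi * epsilon_0 * hbar * c**3)

print(f"A ~ {A:.2e} s^-1   (lifetime ~ {1e9 / A:.0f} ns)")
```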
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
10_Atoms_in_External_Fields_II.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, let's get started. Last class, which also means last week, we discussed what happens when atom are exposed to external fields. Well, you would say, isn't it enough if you understand atoms in isolation? Well, not quite. Because whenever we want to talk to the atoms, whenever we want to manipulate them or find out in what states they are, we have to apply external fields. The way how we communicate with atoms is through electric, magnetic, and electromagnetic fields. And therefore, we have to understand what happens to the structure of atoms when we expose them to such fields. We started out with structure in magnetic fields. And if I just show you this picture, this is what we discussed last week. However, I noticed that our discussion with the different coupling cases-- fine structure plus magnetic field, hyperfine structure, strong fields, weak fields. I noticed that when I was teaching, it's a lot of details and it looks a little bit messy. So what I want to do, therefore at the beginning of today, is I want to give you sort of a summary that you see the bigger picture. That you see beyond the details, that what I actually taught you about atoms in magnetic field is some paradigmatic example of quantum physics. What happens if you have two different terms in Hamiltonian and you have to interpolate between one and the other? But before I do that, do you have any questions about magnetic fields, magnetic structure? Well, then let's try to summarize as follows. What we have is we have a Hamiltonian. And it has one part, the hyperfine interaction, which depends on I dot J. And then it has an external magnetic field part. And what couples to the magnetic field, which we assume is in the J-direction-- in the z-direction are the z-components of the magnetic moment. And the z-component of the magnetic moment are proportional to the mJ or mI quantum number, to the magnetic quantum number, of the atom and the nucleus. So in a weak field, it is the hyperfine structure which dominates. So in a weak field, we first solve for the hyperfine structure. And then we use the eigenfunction of the hyperfine structure. And the eigenfunction of the hyperfine structure have the quantum number F where J and I have coupled to F. And then we treat the magnetic Zeeman Hamiltonian perturbatively. And that led us to the formulation of the Lande g-factor, gF. The other case is the strong field case where the magnetic field dominates. Then, we simply solve for the hyperfine structure in the magnetic field. It's one of those rules in quantum physics, or in physics, or maybe even in life, first things first. You should first take care of the big things. And this is now the magnetic field. And since the magnetic field Hamiltonian is diagonalized when we have eigenfunctions where mJ and mI are good quantum numbers, this is sort of-- if you ignore the hyperfine coupling, this is the exact [INAUDIBLE] of the Zeeman term. And then in perturbation theory, we look for the hyperfine coupling. And well, we do perturbation theory in eigenfunctions with mI and mJ. And that means if you have the I dot J term, it is only the component mI mJ which remains. So I've given you those two cases. 
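A compact written form of the two limits just described (my notation; signs and the small nuclear Zeeman term are suppressed, and the angular momenta are in units of $\hbar$):

\[
H = A_{\mathrm{hf}}\,\mathbf I\cdot\mathbf J+g_{J}\mu_{B}B\,J_{z}+\dots
\]
Weak field, eigenstates $|F,m_{F}\rangle$:
\[
E\approx\frac{A_{\mathrm{hf}}}{2}\bigl[F(F+1)-I(I+1)-J(J+1)\bigr]+g_{F}\,\mu_{B}\,m_{F}\,B .
\]
Strong field, eigenstates $|m_{J},m_{I}\rangle$:
\[
E\approx g_{J}\,\mu_{B}\,m_{J}\,B+A_{\mathrm{hf}}\,m_{I}\,m_{J}.
\]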
Now, what you should also learn here in this example is the language which we use. And sometimes, I would say, the language can be more confusing than the equations. What we say here is we say that the angular momentum of the electron and of the nucleus are coupled to the magnetic field axis. They are quantized. The approximate eigenstates are those which have a specific quantum number in the z-direction because the magnetic field points to the z-direction. So we're saying I and J are strongly coupled to the z-axis by the magnetic field. And then we treat the coupling of I and J with each other in perturbation theory. Whereas in the previous case, we say I and J strongly couple. And when I and J strongly couple, F becomes a good quantum number. And that means I and J both precess around the axis of the total angular momentum f. And therefore, we say I and J couple to F. And then we solve for this coupling of F to the magnetic field in a second step. But I hope you see that there's two limiting cases. We can exactly diagonalize one term, and then we perturbatively add on the result for the second term. Of course, in the age of computers I could have simply written down for you a Hamiltonian and said, well, it has to be numerically diagonalized. What I discussed instead were the two limiting cases. Now, this discussion now allows me to discuss what happens when we go to even stronger fields. Well, when we go to even stronger fields, then we may have fields which are even stronger than the fine structure coupling, the coupling of the orbital angular momentum of the electron and the spin angular momentum to J. And well, without even any deviation, which is obvious, you know what happens now is that each component which provides magnetic moment-- the spin, the orbital angular momentum, and the nucleus-- the dominant term for each of them is the coupling to the magnetic field. So in strong magnetic fields, these are the eigenstates. The eigenstates are labeled by mI, mL, mS. So we have taken care of the strong coupling term. And now in addition, we are now treating in perturbation theory some fine structure coupling, but the quantum numbers are already distributed, mL, mS. There is a coupling become between mI mS and a coupling between mI and mL. So this is sort of the limiting cases. But as a general illustration of quantum mechanics, I thought this was a nice example for a Hamiltonian where we have different scalar products, like B times S, B dot L, S dot L, I dot J. And the question is, how do we take care of those different parts because they do not commute? Of course, the theorist would just say, I simply diagonalize it and that's it. But if you want to develop intuition, then you have to discuss the limiting cases. And in particular, the approach which allows an intuitive understanding is first things first. And we first treat the stronger terms and then the weaker terms. And we can quantitatively derive, analytically derive expressions, for instance, for the Lande g-factor in this vector model. This vector model assumes, so to speak, that a state which has an eigenfunction of mJ rapidly precesses around the z-axis. And this vector model actually allows you to do easy calculation without Clebsch-Gordan coefficient. So the concept of the vector model is rapid precession for transverse components and projecting of vectors onto the axis around which you have rapid precession. But this is simply a tool to do calculations without the explicit use of Clebsch-Gordan coefficients. 
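For reference, the vector-model projections give the standard Landé factors (neglecting the nuclear contribution, as in the lecture):

\[
g_{F}\approx g_{J}\,\frac{F(F+1)+J(J+1)-I(I+1)}{2F(F+1)},
\qquad
g_{J}=1+\frac{J(J+1)+S(S+1)-L(L+1)}{2J(J+1)}
\quad(\text{for }g_{s}\approx2,\ g_{L}=1).
\]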
OK, so this is what I wanted to tell you about atoms in magnetic fields. Any questions? OK then, we can actually move on to atoms in electric fields. But before we do that, we should have some clicker questions about atomic structure and atoms in external magnetic fields. So get out the clickers. So the first questions take us back to electronic structure. It's a question about, how do wave functions, how does density, and how does inverse size scale with principal quantum number? So the first question is, how does 1/r, 1 over the size of the electronic wave function, how does it scale with n? 1/n, 1 over n squared, 1 over n cubed. OK. OK, yes. It's 1 over n squared. But I would have hoped that 100% of you would know it because 1 over r is the Coulomb energy. The Coulomb energy is 1/2 of the total binding energy because of the Virial theorem. So when you see 1/r, there should be a flash in your head which says energy. And the Rydberg energy is 1 over n squared. The energy levels of hydrogen are 1 over n squared. OK. Yeah, next question. How does psi of 0 squared, how does the density of the electron at the origin scale with principal quantum number? 1/n, 1 over n squared, 1 over n cubed, or 1 over n to the sixth? OK, yes, very good. So what I try to remind you here is that there are two different radii. There is one radius which scales with n squared. But if you calculate the density at the origin, you would say, well, r scales with n squared. The volume scales with n to the sixth. And then you would say the density scales with n to the minus 6. You're wrong. And we had a discussion that there are two different length scales in the hydrogen atom and in hydrogenic wave functions. One scales with n squared and the other one scales with n. And therefore, the density at the origin is n to the minus 3. Good. Next question, from hydrogen to helium. In helium, for the same electronic configuration, are singlet or triplet states more tightly bound? So we talked about a shift or splitting between singlet and triplet states. Which way does it go? You want to try again? For the same electronic configuration-- the ground state has only one configuration. It has only one state in the ground state. But now we go to the excited state, to the excited states, and there are a number of excited states because they have the same configuration, but they can be classified by singlet and triplet states. OK, we are converging. It is the triplet state. Some people are confused when they think about molecules. Usually, in molecules the singlet state is more tightly bound than the triplet state. But the magic word is for the same electronic configuration. You can have one orbital filled with two electrons only in a singlet state because of the Pauli exclusion principle. It is only in the first excited state or in an excited state of a molecule or of the helium atom that you have two orbitals, 1s and 2s. And you can now put the electrons in with the same spin or with opposite spin. So usually, it's only in an excited state that the question singlet versus triplet arises. And then in the excited state manifold, the triplet state is lower because it has a symmetric spin wave function and an anti-symmetric spatial wave function. OK, the next question. OK, so we understand now there's a difference between triplet and singlet states in excited states for the same electronic configuration. And the question is, what is the origin of the energy which is splitting the singlet from the triplet state?
Magnetic energy, spin-spin interactions, or electrostatic interactions? Yes, the Coulomb interaction is an electrostatic interaction. We discussed the singlet-triplet splitting and the structure of helium without any magnetic or spin-dependent interaction. All we had is the Coulomb interaction. And in the triplet state, which is the symmetric spin state, the spatial wave function has to be anti-symmetric. In the singlet state, the spatial wave function has to be symmetric. And the symmetric and the anti-symmetric spatial wave functions have a different Coulomb energy. So the spin, through the anti-symmetry, through the Pauli exclusion principle, determines the symmetry of the electronic wave function. And it is then purely the Coulomb energy. That's why the singlet-triplet splitting is so big. Because it's not magnetic, it's Coulomb in origin. OK, next question. Which interaction reflects-- oops, maybe you want to still read it. Which interaction reflects that the potential between nucleus and electron is not exactly a Coulomb potential 1/r? We've usually discussed the Schrodinger equation, hydrogen, the Bohr model for an exact 1/r potential. But then we discussed a lot of phenomena. And I want you to figure out now which of those choices means, in essence, that you do not have a 1/r potential? OK, we have three choices. So the volume isotope effect, I think, is a no-brainer, is trivial. It means explicitly that the nucleus is not a point, has an extended volume, and that means inside the nucleus the electron is not experiencing a 1/r potential. So it's clear that C is always correct. The Lamb shift is actually causing a deviation from a 1/r potential because-- well, both the vacuum polarization and the-- well, you can go ahead and say, the fact that we have QED, that we have other modes of the electromagnetic field, means that there's a deviation from the 1/r potential. The interesting question is the Darwin term. And the people who clicked D included the Darwin term. That's a little bit trickier because I explained the Darwin term as Zitterbewegung, as this trembling motion of the electron which smears out the 1/r potential. So you would think, coming from the non-relativistic Schrodinger equation, that there is an effect which is smearing out the 1/r potential. On the other hand, the Zitterbewegung, the Darwin term, is just one term which is included in the Dirac equation. And the Dirac equation, which includes fine structure and relativistic energy corrections and the Darwin term, is an exact relativistic formulation of the 1/r potential. So in other words, I would say the correct answer is E. The people who included the Darwin term, I would say the Darwin term is not a deviation of the 1/r potential because it's simply a way to explain what is the result of the Dirac equation. The Dirac equation uses exactly the 1/r potential without any corrections. So you can say that if you want to understand the relativistic solution to the 1/r problem, you include a term which in the non-relativistic equation slightly changes the Coulomb potential. Questions about that? OK. Fine structure. The fine structure affects only states with L unequal to 0, through a coupling term L dot S. Is this statement, the way it is written, true or false? I would say it's false because the fine structure has three contributions-- the Darwin term, the relativistic kinetic energy contribution, and this L dot S term. And it affects all states, also the S states, through the Darwin term and the relativistic energy contribution.
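Two of the clicker answers above can be written as formulas (standard results, in my notation). For the hydrogenic scaling questions,
\[
\Bigl\langle\frac{1}{r}\Bigr\rangle_{n\ell}=\frac{1}{n^{2}a_{0}},
\qquad
\bigl|\psi_{n,\ell=0}(0)\bigr|^{2}=\frac{1}{\pi n^{3}a_{0}^{3}},
\]
and for the singlet-triplet question, with $\phi_{a}$, $\phi_{b}$ the two occupied orbitals (for instance 1s and 2s in helium),
\[
E_{\mathrm{singlet/triplet}}\approx E_{a}+E_{b}+J_{\mathrm{dir}}\pm K_{\mathrm{exch}},
\qquad
K_{\mathrm{exch}}=\iint\phi_{a}^{*}(\mathbf r_{1})\,\phi_{b}^{*}(\mathbf r_{2})\,\frac{e^{2}}{4\pi\epsilon_{0}r_{12}}\,\phi_{b}(\mathbf r_{1})\,\phi_{a}(\mathbf r_{2})\,d^{3}r_{1}\,d^{3}r_{2},
\]
with the plus sign for the symmetric spatial (singlet) wave function; $K_{\mathrm{exch}}$ is positive here, so the triplet lies lower by $2K_{\mathrm{exch}}$, a Coulomb energy and not a magnetic one.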
So the fine structure is more than just an L dot S term. The next question is: for L unequal to 0, the orbiting electron creates a magnetic field. And the spin-orbit interaction can be simply regarded as the energy of the electron's spin in this magnetic field. Would you say that this sentence is true or false? I thought it's true, but maybe people want to tell me what is false about the statement? Maybe the first sentence-- people, tell me, the orbiting electron creates a magnetic field. Yes. AUDIENCE: I said false for this because I normally would picture that from the electron's frame of reference, the nucleus creating a magnetic field is the magnetic field [INAUDIBLE]. PROFESSOR: OK, the second part, that there is a magnetic field and the spin-orbit interaction is the energy of the electron spin in this magnetic field, is probably generally accepted. But the first question is, does the orbiting electron create the magnetic field? Well, we have the two options. We can say the electron moves, and in its own frame there is a v cross E term. And therefore, a magnetic field. So we can say that the electron's motion creates a magnetic field in its own frame through the relativistic transformation. So in that sense, it is correct. But I would also side with you that there is an alternative view of saying, in the electron's frame, the nucleus rotates around the electron. And it's the nucleus which creates the magnetic field. In the end, it's the relative motion between the two. Well, isn't it a good thing that we are not giving scores on that? So yes, if you want, everybody can feel that you have given the right answer. Oh, yeah. In Dirac-- that should be easy, but it's just a warm-up question for the next one-- which states are degenerate in Dirac theory? And you have a few choices. That should be easy. Yeah, it's the S 1/2 and P 1/2. Dirac theory does not lift the degeneracy between states with the same J, 1/2 and 1/2. But between 1/2 and 3/2 states, there is, actually, the fine structure splitting, which we've just discussed. OK, the next question is, what effects lift now the degeneracy between the 2 S 1/2 and the 2 P 1/2 term? OK, we have three candidates-- the Lamb shift-- well, the Lamb shift is famous and the Lamb shift was discovered because it splits the degeneracy between the two. QED corrections have different effects on S 1/2 and P 1/2. The size of the proton does also shift it, because the size of the proton-- the volume effect is more important for S states than for P states. Maybe the question is, does the mass of the proton lift the degeneracy? No, it doesn't. It would just mean if your nucleus has a finite mass, you simply have a two-body problem with a reduced mass, which is different from the bare mass of the electron. But nothing else is changed, no degeneracies. It's as if the electron has a different mass, which is the effective mass. So the correct answer is D. OK, four more questions. And this is about hyperfine structure. So the question is, what-- well, hydrogen in the ground state has four states. Because the electron has a spin up and down and the proton has a spin up and down. And 2 times 2 is 4. So we're talking about a multiplicity of 4. And I'm asking you now about the limits of high and low field. First at high fields, then at low fields. And the question is, what are the magnetic moments of those hyperfine states? And we neglect the nuclear magneton compared to the Bohr magneton. So what are the magnetic moments of those hyperfine states in units of the Bohr magneton?
Oh, yeah. What happens at high magnetic fields? Remember, at high magnetic fields, this is actually the simpler case. Often, you think the low magnetic field is simpler because it connects more with the isolated atom. But you should take away the message that high magnetic fields are simple. Because in high magnetic field, each spin couplets to the magnetic field by itself because the coupling to the strong magnetic field-- that's the definition of a strong magnetic field-- is stronger than the coupling of the two spins with each other. So the problem I'm giving you is that you have an electron spin which can be up and down and it couples to the magnetic field. And then we have the nucleus spin, but the magnetic moment of the nucleus is so small that we neglect it. So what are the possibilities now? Well, we have four states of hydrogen at high magnetic field. Two have the electron spin up, nucleus spin up/down. Two have the electron spin down. And then when the nucleus spin is up or down in those two states. So all the states at high magnetic fields have either the electron spin up or the electron spin down. So therefore, the correct answer is A. We have two states where the electron spin is up and two states where the electron spin is down. It's just a complicated way of asking you, what are the possible energy levels of an electron in a magnetic field? And the answer is, well, plus-minus 1 Bohr magneton times the magnetic field. Questions about it? OK. Now, we go to the more complicated case, to low magnetic fields. And again, same question. What are the magnetic moments of those hyperfine states? So you have four states. The number of states, of course, doesn't change. That's the dimension of our Hilbert space. But now we are at low magnetic field, and what is the magnetic moment, which is nothing else than the derivative of the energy with respect to the magnetic field? Yes, the correct answer is B. We have two manifolds, one is F equals 1, where one slope is 0 and one slope is plus-minus 1. And then we have an F equals 0 state. So it is 1, minus 1, and 0, 0. OK. Let's now make it more interesting. Let's replace the proton by a positron, the anti-particle of the electron. So now we have a similar situation, but what happens now is, of course, the contribution to the magnetic moment form the nucleus, which is now the positron, is as important as the contribution of the electron. So you have two spin 1/2's coupled now. One is positive, one is negative. And you should figure out again, what are the energies? But before we talk about the energies, let's first talk about, how many hyperfine states do we have in the ground state-- 1, 2, 3, or 4? Yes, we have four states because we have two particles-- positron, electron. Each of them has spin up, spin down. 2 times 2 is 4. And therefore, we have now four states. And the question is again, at high and low magnetic fields, what are the magnetic moments of those states? So we have four states-- spin up, spin down-- of the electron and the positron. And the question is, what are the magnetic moments of those hyperfine states? D is correct. We have 1/2, spin 1/2. If the two couple like up-up and down-down, we have the maximum spin. But since one particle is positive, one is negative, when the spins are aligned, the angular momenta are anti-aligned. And therefore, the magnetic moment is 0. So when they couple parallel, the magnetic moment is 0. 
When they couple anti-parallel, the two magnetic moments of one Bohr magneton each add up, and we have either 2 or minus 2 as the magnetic moment. Any questions? Then finally, the last question. Same situation, positronium, but now at low magnetic fields. What are the magnetic moments of the four hyperfine states of positronium at low magnetic field? All right. Let's discuss it. What is the structure of the ground state at low magnetic field? What is the good quantum number at low magnetic field? AUDIENCE: F. PROFESSOR: F. It's hydrogen. It's like hydrogen. 1/2 and 1/2. If we have an S of 1/2 of the electron, the I of the positron is also 1/2. And 1/2 and 1/2 couple to F. And what are the values for F? F equals 1 and F equals 0. OK, what is the magnetic moment of the F equals 1 state? In order to get F equals 1 out of 1/2 and 1/2, you have to align the spin of the electron with the positron. So the F equals 1 state is the state where the two spins are aligned. What is the magnetic moment of this state? AUDIENCE: 0. PROFESSOR: 0. How many states are in the F equals 1 manifold? What's the multiplicity of F equals 1? 3. Plus 1, minus 1, and 0. So we have an F equals 1 state which has angular momentum but no magnetic moment, and it has a multiplicity of 3. So three states have 0 magnetic moment. In other words, you would expect an F equals 1 state to have this kind of Zeeman structure. But because of the special situation in positronium, the Zeeman structure is like this. There is no linear effect. It's a quadratic effect. All three states start out with 0 slope because as long as the spins couple to F equals 1 and we don't have a magnetic field messing up with the coupling, the magnetic moment is 0. OK, now what happens in the fourth state, which is F equals 0? In the fourth state, which is F equals 0, the two spins couple in an anti-parallel way. So now, what is the magnetic moment when the two spins couple in an anti-parallel way? The spins subtract. But because of the different charge, plus and minus, the magnetic moments would add up. That's what we just discussed in the high field case. So you would think the F equals 0 state has a magnetic moment. But in an F equals 0 state, it cannot point anywhere because the angular momentum is 0. And therefore, the magnetic moment vanishes in a most trivial way. So this is hydrogen and this is positronium. So positronium has four hyperfine states. And the slope of all four, for the reasons discussed, is all 0. So sorry, A is the correct answer, without any ambiguity this time. OK. Any questions? OK, then let's talk about atoms in electric fields. We start out in-- we put the atoms in a uniform electric field. Again, we assume that it points in the z-direction and its magnitude is epsilon. And we want to ask, what is the electrostatic energy in this electric field? And we are using the fact that the electrostatic energy can be expanded in a multipole expansion. We have a monopole term, we have a dipole term, and we have a term quadratic in the field. So the charge, of course, is-- the atom, itself, is a neutral atom. So there is no monopole term. The linear term in the electric field would correspond to a permanent dipole moment. And I will remind you in a moment that this is 0. And then the term which provides us with a Stark effect, with the energy shift of atoms in an electric field, will be the third term here, which is characterized by the polarizability alpha. And it corresponds to an induced dipole moment. There is an induced dipole moment, which is alpha times epsilon.
And then the induced dipole moment interacts with the electric field. And that gives then, epsilon times epsilon-- epsilon squared. So this would be a classical multipole expansion. And we will now derive results quantum mechanically. The perturbation operator for us is the dipole operator. And that could, in principle, include a permanent or an induced dipole moment. So it would take care of the second and third term, the dipole operator and its projection on the z-axis. So the dipole operator is the charge of the electron times the position, with a minus sign-- minus e is the charge. And as long as the situation is isotropic, if you apply an electric field in the z-direction, all the relevant dipole moments are in the z-direction. For anisotropic materials, you could have an electric field in the z-direction and the dipole moment points at an angle, but we do not have such a situation for our atoms. OK, so the operator is then simply the charge of the electron times the z-coordinate times the electric field. And this has odd parity. And that leads us immediately to the result, when we have an atom in an eigenstate n and we ask, what is the expectation value of H prime? It is 0 because of parity. So the answer is, we have no permanent dipole moment unless we have degenerate energy levels. If n is a non-degenerate level, this matrix element is 0 by the parity selection rule. OK, now we want to do perturbation theory. So our perturbation operator is this. And since we have the clickers, I just want to ask you two quick questions. I will do the perturbation theory and I will explain everything, but maybe you want to predict the result, which I want to derive in the next 10 minutes. And the question is, what will we actually get for the expectation value of H prime? Will we get the expectation value of the dipole operator times the electric field or will we get the expectation value of the dipole operator times the electric field over 2? And the next question would be the same, but what do we get for the total Hamiltonian? So these are the questions. I want to discuss with you in the next 10 minutes, simply using perturbation theory, expectation values. Expectation values of the total energy, H0 plus H prime. But also, expectation values of the electrostatic energy, which is H prime. And the question is-- I mean, on dimensional grounds, what we get is a dipole moment times an electric field. And this is one of the situations where factors of 1/2 are not just bookkeeping. Factors of 1/2 really reflect interesting physics. And I want to sort of highlight it by asking you, what would you expect we get for those expectation values when we solve for atomic energy levels in electric fields? So we're discussing first question 1. OK, let's go to question 2. OK. Anyway, now I know I'm not boring you with the derivation I want to give you in the next 10 minutes. I want to give you the answer right away by drawing up another problem where maybe the answer is more intuitive. And this is: we have a mass on a spring with spring constant k. And now the equivalent to the electric field which we switch on is-- we switch on gravity. And due to gravity, the object sags by an amount delta z. So the question is, what is-- and delta z is like the dipole moment. What is the gravitational energy gained by the object because it has fallen down? It is sagging down due to gravity. Well, I think you would agree that the answer is, it is mg times delta z.
This is the work done by gravity with a minus sign. So the expectation value of the perturbation operator is minus mg times delta z. Or in the electrostatic case, it's simply the dipole moment times the electric field. But what happens is the-- so this is the gravitational energy. How much is the total energy affected when we switch on the gravitational field? 1/2 of it, because of the negative energy which is gained in the gravitational field, 1/2 is used to stretch the spring. 1/2 of it goes into the internal energy of the system. So therefore, for the electrostatic energy H prime-- H prime is the operator of the electrostatic energy-- the answer here is A. But the total energy is B because part of the energy is needed to stretch the spring. And as I want to show you, stretching the spring is-- we admix to the ground state some excited state. This costs energy, like stretching the spring costs energy. And this is responsible for the factor which is exactly 1/2. Well, I could stop here. I think I've explained it all, but let's follow the usual-- the standard approach. And let's do second-order perturbation theory and calculate the energy, calculate the dipole moment, and see that everything is as we expect now. So we want to do second-order perturbation theory. We know already the first-order term is 0. This was a discussion about parity. And in second-order perturbation theory, the state n has an energy, which is the unperturbed energy. And then in second order, we have the matrix element to all other states. We square it. We divide by the energy denominator. We sum over all states m, but I make a prime here. Of course, we are not summing over the state-- we exclude n from the summation. And the prefactor here is the electron charge squared times the electric field squared. OK, pretty much that's the result in second-order perturbation theory. So this is the energy and we want to relate the energy to the dipole moment. So the next step is now we calculate d. And we calculate d from the first-order wave function because we already get an effect in first order and everything here is about leading order. So the expectation value of the dipole operator-- so we take the expectation value of the dipole operator and we use the 0-th order, the unperturbed state, plus the first-order correction. And we know already that the diagonal terms do not contribute. This is a parity selection rule. So we get contributions from the cross term, which is n0, the dipole operator, with n1. So let's just suppress vectorial notation. We know everything is along the z-axis. So we have the 0-th order wave function. Our operator is z. And now we have to write down the first-order correction to the wave function. And the first-order correction is the sum over all other states. We make an admixture of the state m, and this admixture uses a matrix element. And here, we have the energy denominator. So what we obtain is-- we have the electron charge here from the dipole moment. We have the electron charge due to the perturbation operator. So it's the electron charge squared. We have the electric field. And then, this is due to the admixture of the wave function with the dipole operator. And now because we take the matrix element of the dipole operator, we get another occurrence of the dipole operator. So therefore, we do first-order perturbation theory, but we take the first-order result and ask, what is the expectation value for the dipole moment? And that means the dipole operator, or the perturbation operator, appears twice.
And our result is as expected. Quadratic in the matrix element and it has this energy denominator. So the definition is that a dipole moment is alpha times the electric field. So therefore, all that equals alpha. And if you compare now the result for the dipole moment with the second-order perturbation theory for the electric field, for the energy, we find-- here's a factor of 2, but there is no factor of 2 up there. We find that the energy or the energy shift delta En, it has exactly the same matrix element as the polarizability. It is-- yes. It is this, 1/2 alpha epsilon squared. Since the perturbation operator, I'm just writing it down here, was dipole moment times electron, that means that the energy shift is-- and this is what we expected now, is 1/2 times the expectation value of the dipole moment times the electric field. So now we have obtained with a quantum mechanical calculation the result. I told you that the energy shift of the energy levels is 1/2 the dipole moment times the electric field. Let me just redo the calculation in a way I like. And this is I want to determine now the total energy, but sort the terms in a little bit different way. So I want to know, what is the energy in our result? And what we do is we are calculating the energy using our wave functions. We take the total Hamiltonian and take the wave function. So this leads us to three terms. One is the unperturbed energy. The unperturbed energy, the energy contribution of the first-order correction. This is the part due to H0. And the part due to H prime is simply the dipole moment times the electric field. So the first part is, of course, simply the energy E0 times the norm of the wave function n0. For the second term, we use the first-order perturbation theory for n1. This is our sum over m. Em minus E0. m H prime 0. Because n1 is on either side, this is the amplitude of the state n1. We have to square it. And since we calculate what is the expectation value of H0, we multiply with the energy Em. OK, so we are done. We calculate the total energy. We get three terms. One is the unperturbed energy, one is the dipole energy in the electric field, and one is the extra term, which I want to discuss. Actually, this term is the internal energy, which would correspond to the stretching of the spring in Hooke's law. Now, in order to show it to you explicitly, I want to use E0 equal 0 for the energy. Because then this term is 0. I can neglect this. And one of the squares, 1 over Em squared, cancels with the Em. It confused me for a while. If I don't set E0 to 0, the result looks different. But what happens is, if you do perturbation theory, there are certain issues with the normalization of the wave function. And the wave function n0 has to be-- the contribution if you look at the wave function in perturbation theory of a state, the 0-th order wave function has an amplitude of 1. And this amplitude of 1 only changes in second order. So since I'm doing a second-order calculation here, I have to include those non-standard terms. But I can also bypass it by setting E0 to 0, then the second-order term in the norm doesn't matter. So in other words, if you set E0 to 0, you make your life easier. If you do not set E0 to 0, you have to include some more terms in your calculation. But the result is-- just one second. But yeah, the result which I wanted to emphasize is this one here. It is a positive energy. You can immediately inspect that t positive energy is the dipole moment times the electric field over 2. 
This is exactly analogous to the energy of the spring in the gravitational problem. So in other words, this is the energy, internal energy, because we admix excited states to the ground state. This costs energy and it exactly accounts for the occurrence of the factors of 1/2. Anyway, this is just the standard theory of the DC Stark effect and of the atomic polarizability, but I put a little bit of emphasis on those factors of 1/2 and tried to explain in greater detail the contributions to the AC Stark effect-- to the DC Stark effect-- which come from the electrostatic energy and which come from the internal energy. Questions? Yes. AUDIENCE: Sure. I have a study question. What is allowing you to use non-degenerate perturbation theory? What's the operator that [INAUDIBLE]? PROFESSOR: Well, I'm looking-- what allows me to do non-degenerate perturbation theory. Well, I assume we don't have degeneracies. If you would go to very high Rydberg states-- and actually, we do that not today, but on Monday-- we are looking at a situation where the splitting between states of different L becomes so small that the electric field mixes them. Then, we have to do degenerate perturbation theory. And that means we get now a linear term, a linear Stark effect, not a quadratic Stark effect. Here, I would say we are doing perturbation theory of the ground state. It's an S state. It's not degenerate. Maybe your question is also addressing that we have multiple ground states. We have hyperfine structure. However, the electric-- the Stark effect, the electric field, does not couple to the spin at all. So therefore, all the magnetic energies-- the hyperfine energies-- are completely unaffected. And also, if we apply an electric field, all the hyperfine states experience the same shift and there is no coupling between them. So although we have multiple ground states and we have hyperfine structure, it's a non-degenerate problem because there is no coupling between the different hyperfine states. In other words, the theory or the discussion of the DC Stark shift is: you have an S state, you couple with an electric field, and there is no degeneracy in the S state. Other questions? Well, then we've talked about alpha. The only parameter which comes out of this treatment is alpha. And now we want to discuss, how big is alpha? Or first, what are the units of alpha? Well, the units of alpha were-- you can go back to the second-order perturbation result. But the units of alpha were the charge times length. This was the dipole operator. It was squared. And in perturbation theory, we divided by energy because we had an energy denominator. Well, we can write that as q squared over l times l cubed. But q squared over l is the Coulomb energy. And therefore, when I'm interested in the units, the energy units cancel. So therefore, we find that the unit of the polarizability, at least in cgs units or atomic units, which I've chosen here, is simply a volume. The question is, what volume? Well, if you would calculate the polarizability for hydrogen, and simply make the assumption that the only important matrix element goes from the S to the P state, then we have a matrix element which is on the order of the Bohr radius. And the energy splitting between the ground state and the first excited state is three quarters of the Rydberg constant. So for hydrogen in the 1s state, if you only use the coupling to the 2p state, we find that alpha is the Bohr radius cubed. And the prefactor is 2.96.
If you do the summation over all states, the prefactor would be 4.5 because there are higher states, especially continuum states, which contribute to the sum. We have only five minutes left, but that allows me to show you that this is not a coincidence that we obtain-- here, what we obtain is the Bohr radius cubed, which is pretty much the volume of the hydrogen atom. But we can now do an approximation. It's not really relevant, but it has a historic name-- the Unsöld approximation. It's just nice to show how things work out. We have a second-order matrix element, so we couple the state n with the operator z to a state m. But if we assume that all energy denominators can be taken out of the summation by assuming that we have some kind of average excitation energy, then in the sum over m, the projectors onto the states m just add up to the identity, and the sum collapses. So what we have is, if you take an average energy denominator out of the summation, what we find is that what matters is the expectation value of z squared. And we can even assume that in the energy denominator, the excited state energy is negligible. The hydrogen atom has a binding energy of 1 Rydberg and the first excited state has a quarter Rydberg. So at the 25% level, we can set that to 0. So I'm waving all my hands, but I'm getting a simple expression for the polarizability in the ground state. And this goes as follows-- the ground state energy is-- we have discussed Coulomb energy, the virial theorem, and all that. We need just the expectation value of 1 over r in the ground state. And for the z squared matrix element, we can simply say for an S state that x squared, y squared, and z squared are all equal. So it is one third of r squared in the ground state. So therefore, continuously waving our hands and making approximations, we find that the polarizability is the expectation value of r squared divided by the expectation value of 1 over r. So this is some r cubed expectation value, which is an atomic volume. So you see the nature of the perturbation expression suggests that it cannot be anything else than the atomic volume. I sort of like that because when people discuss, for instance in my group, does lithium or rubidium have a bigger polarizability? Well, the bigger atom has a bigger volume, and the fluffier atoms have the larger polarizability. And that's pretty much based on that result. Now, let me finally do a comparison. There is another system for which you have done calculations of the dipole moment. And this is in classical E and M for a conducting sphere. For a conducting sphere in an electric field, you can exactly solve the boundary conditions, the boundary value problem, get the electric field, and find the dipole moment. And the exact result is that the dipole moment is the electric field times the cube of the radius of the sphere. So in other words, the dipole moment, or the polarizability of this sphere, is-- and neglecting factors which are only of order unity-- the dipole moment-- sorry, the polarizability of a conducting sphere is the volume of the sphere. The polarizability of a hydrogen atom, or using the Unsöld approximation for all simple atoms, is the volume of the atom. So I find it sort of interesting that when it comes to dipole moments and to polarizability, that atoms pretty much behave like metallic-conducting spheres of the same volume. Any questions? OK, then let's stop here and we meet again on Monday.
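As a quick numerical cross-check of the numbers quoted at the end of this lecture, here is a minimal Python sketch. It assumes only the standard textbook value of the hydrogen 1s-2p matrix element and the 3/8 Hartree excitation energy; the variable names are mine.

import numpy as np

# Hydrogen 1s static polarizability in atomic units (lengths in a0, energies in Hartree).
# Second-order perturbation theory, keeping only the 1s-2p term of the sum over states.
z_1s2p = 128 * np.sqrt(2) / 243       # <1s|z|2p, m=0>, standard textbook value (~0.745 a0)
delta_E = 3.0 / 8.0                   # E(2p) - E(1s) = 3/8 Hartree = 3/4 Rydberg

alpha_2p_only = 2.0 * z_1s2p**2 / delta_E
print(f"2p-only estimate: alpha = {alpha_2p_only:.2f} a0^3")   # ~2.96, the prefactor quoted above
# The exact sum over all states, including the continuum, gives 4.5 a0^3 -- roughly the
# atomic volume, the same scaling as the conducting sphere with alpha = R^3.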
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
11_Atoms_in_External_Fields_III.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Good afternoon. So we are still in a discussion of atoms in external magnetic fields. And we are working our way up from simple static fields to time dependent fields. Last week and the week before, we covered external magnetic fields, Zeeman shifts, different coupling limits, strong field, and weak field. Last class on Friday, we talked about what happens when you put atoms into DC electric fields. So what we did was simple. Lowest order, which means in this case, second order perturbation theory. And we derived an explicit expression for the polarizability, alpha. And this polarizability tells us how energy levels are shifted quadratically with the electric field. I put some emphasis in talking about what is inside perturbation theory and identified with you that, yes, if you have an electric field, we have electrostatic energy. But in order to polarize the atoms, we have to create an internal energy. In a concrete example, if we have an s state, we have to mix a p state to create a dipole moment. And this costs energy. Exactly in the same way as when you have a spring and pull on the spring with gravity, you gain gravitational energy. But you have two pay exactly half of it to create internal energy in your spring. And this is actually the reason for this factor of 1/2 as we discussed in great length. Question is do you have any questions about that part? Because we want to go to the next level. Perturbation theory? DC polorizability? So anyway, the menu for today is we have done perturbation theory here for weak electric fields. But the question came already up in class, is it really valid? Or for what regime is it invalid? So today I want to talk to you briefly what happens when we go beyond perturbation theory. When we go beyond the quadratic Stark effect. And that leads us to a discussion on stability of atoms in strong electric field and field ionization. I like to sort of feature it because it allows me to tell you something about the peculiar properties of Rydberg atoms. And also the ionization of Rydberg atoms through electric field. This is how people in our field create cold plasmas. And it's also a way to do a very sensitive detection of atoms. So what I'm telling you today is interesting for its own sake, but also because it's an important tool for manipulating atoms, creating plasmas, or sensitive detection. So this will probably only take 10 or 20 minutes. And then we want to go form DC electric fields to AC electric fields. So we then discuss the AC polarizability. And, well, that will take us from perturbation theory in a time independent way what we have done now for DC fields to time dependent perturbation theory. So all the topics are rather basic aspects of quantum physics. But as usual, I try to give you some special perspectives from the atomic physics side. So in perturbation theory, we have a mixture of other states. And this said mixture is done with the matrix element. And in perturbation theory, we always have an energy denominator. Intermediate state to ground state. And if the electric field is smaller than this value, then we have mixture of other states into the ground state. 
So usually when we estimate the validity of perturbation theory, we look for the state which is closest to the ground state. For which the energy denominator is smallest. So therefore, i here is the nearest state, but, and this is important, of opposite parity. Otherwise, because of the parity selection rule, the matrix element would be 0 and the electric field does nothing. I made a comment on Friday, but let me do it again. That, of course, means that when we apply an electric field to our favorite atoms, we don't have to worry about the other hyperfine states. The other hyperfine states have the same spatial wave function. Have the same parity. So we are really here talking about the first excited state. And as a concrete example for those of you who work with alkalis with an s ground state, the relevant energy scale here is the excitation energy to the first p state. So let's just estimate it for a single electron atom. And well, this is a hydrogenic estimate. The excitation energy to the first excited state, 1s to 2p, is about 1 Rydberg. And the Rydberg-- or the Hartree-- is nothing else than e squared over a0. It's the electrostatic energy of two charges separated by a Bohr radius. And if we estimate that the matrix element for a strong transition is on the order of the Bohr radius-- there are no other length scales in the problem-- we find that the value for the electric field in atomic units is the charge divided by the Bohr radius squared. And this is really high. It is on the order of 5 times 10 to the 9 volt per centimeter. And this is 1,000 times larger than laboratory electric fields. Those fields would just create sparking along the electrodes. You cannot apply such high electric fields in a laboratory. So, therefore, nothing to worry about. If you have ground state atoms, the Stark effect in perturbation theory is all you need. Actually, to be a little bit more precise, when I used the Rydberg and a0, I made a bit of an overestimate. So typically the critical electric field which would cause a breakdown of perturbation theory for the ground state is around 10 to the 9 volt per centimeter. OK, so we are safe when we talk about ground states. But once we go to the excited states, we have near degeneracy. p states have a three fold degeneracy, and there are nearby states with opposite parity. So we can now have mixing there. And actually, as we will see, for excited states we have already a breakdown of perturbation theory at very, very small electric fields. So let us discuss hydrogenic orbits with principal quantum number n. And if you estimate what is the size of the matrix element? Well, it's not just a0. There is a scaling with n which is n squared. The matrix element in higher and higher excited states scales with n squared. Well, how does the energy separation scale? Well, let's not discuss hydrogen here. Because in hydrogen, energy levels are degenerate. And we would immediately get a breakdown of perturbation theory. Let's rather formulate it for general atoms. And we had this nice discussion about the quantum defect. So if we compare the energy of two l states, they scale as 1 over n squared. But for different l states, we have different quantum defects. Delta l plus 1. And here we have delta l. So, therefore, doing an expansion in n, which we assume to be large, we find that the energy difference is proportional to the difference between the quantum defects for the two states we want to mix with the electric field. And then again, the scaling with the principal quantum number is 1 over n cubed.
So, therefore, we find for the critical field, using the criterion I mentioned above: we take the energy splitting-- the energy denominator which appears in perturbation theory-- and we divide by the value of the matrix element. Well, we had the Rydberg constant, or two times the Rydberg constant, which is nothing else than e squared over a0. And dividing by the matrix element gives another 1 over a0. And then we have the difference between the quantum defects. And now-- and this makes it really so dramatic-- we had an n squared scaling of the matrix element. And we have an n to the minus 3 scaling of the energy differences. So that means the critical field scales as 1 over n to the 5. Go to an excited state with n equals 10, and the breakdown of perturbation theory happens 100,000 times earlier. So some of the scaling in atomic physics is very, very dramatic when you go to more highly excited states. If you throw in that quantum defects become very small once you go beyond s and p states-- the higher states just don't penetrate into the core-- so if l is larger than 2, if you have more complicated atoms, you may add the angular momentum of the core here. But if you put the n to the 5 scaling and the small quantum defects together, you find that critical electric fields are smaller than one volt per centimeter already for principal quantum numbers as low as 7. So that means bring a 1.5 volt battery close to your atom and you drive it crazy. You drive it out of perturbation theory. So what we have is the following. We have, of course, the structure of atoms in an electric field. Here is the electric field. And let me just pick three n values. 18, 19, 20. And now the structure, of course, is that-- the criterion I actually gave you was the criterion for the applicability of perturbation theory. If the energy splittings are smaller than the matrix element times the field, you have to rediagonalize between those levels. And that gives you then not a quadratic, but a linear effect. So, therefore, the structure here is that you have a region where you have strong l mixing. So you have to use degenerate perturbation theory for the different l states. But the manifolds in n, the principal quantum number, are still well separated. But then, eventually, when you go further, you have a region which is called n mixing. So now the electric field is really completely rediagonalizing your states with different principal quantum numbers n. So the result of this discussion is that highly excited states of atoms behave very differently from ground state atoms. An n to the 5 scaling means sensitivity to fields of a volt per centimeter. Level mixing all over the place. And that's why for those highly excited states, people have coined the words Rydberg atoms or Rydberg matter. That means atoms with high principal quantum numbers. And the study of Rydberg atoms was pioneered-- well, the early pioneering work was by our own Dan Kleppner. And then Herbert Walther in Munich, who happened to be my Ph.D. advisor. And finally, Serge Haroche, who was recognized with the last Nobel Prize together with Dave Wineland. So this is not just theory. What I am showing to you here is spectroscopy done at MIT by Dan Kleppner. So what is done here is from the ground state, they excite to an excited state. And whenever you hit an excited state, you see a signal. Let's focus on the upper part. So if at a given electric field you scan the laser, you get one of those traces. You find peaks, peaks, peaks. And those peaks correspond to the different n manifolds with strong Stark mixing.
And eventually when you go to somewhat higher field, you have states all over the place. And this is the regime where you now have n mixing. So in the '70s and '80s, those experiments really obtained a clear understanding and description of atoms in, well, I would say high electric fields. But the fields were not so high. It was just that the atoms were so sensitive that already at low electric fields, they reached what is regarded as the high-field limit. Now, when we record the signal, suddenly the traces stop. And that means the electric field is now so high that the atom no longer has a stable state. The electric field is so high that it literally rips the electron away from the atom. And if you go to higher states, the electric field where this happens is lower. This is the process of field ionization and that's what we want to discuss next. Question? AUDIENCE: Um, for what atoms? PROFESSOR: Those studies were actually done for lithium. It's actually peculiar. Dan Kleppner really liked hydrogen. Dan Kleppner is the person who tried to do almost all experiments with hydrogen. The famous BEC experiment. He also had the Rydberg experiment, which was just in building 26, where Vladan Vuletic has his labs now. This is where spectroscopy of hydrogen was done with the goal of a precision measurement of the Rydberg constant. So they excited hydrogen to some of those high levels. But as probably the experts know, the hydrogen atom is the hardest atom to work with. Because you need Lyman-alpha. You have this huge gap to the first excited state. And that's why, if you can get away with it, you try to work with other atoms. And in those experiments, those people worked with the lithium atom. So the lithium atom has a quantum defect, which is in contrast to hydrogen, where the quantum defect is 0. And this will actually be very, very important for field ionization, as I want to discuss now. Other questions? OK, so at a given electric field, states, so to speak, just disappear. They're no longer stable. And this is the process which is called field ionization. So the phenomenon is that sufficiently strong electric fields ionize the atom. And whenever there is a simple model and I can give you an analytic answer, I try to do that. Because I feel a lot of our intuition is shaped by understanding simple models. And the simplest model for field ionization is just the classical model of calculating what is the saddle point in the combined potential. The combined potential of the nucleus, which is a Coulomb potential, and the external electric field. So many features of the experiment can be understood by this simple three line derivation. So we have a potential. One part of it is the Coulomb potential. And we are focusing on one spatial direction here. And then, in addition, we apply an electric field. And the electric field creates a linear potential. And if I take the sum of the two, well, at large distances, you see, it's the electric field which dominates. Then the Coulomb potential takes over. So that's how it looks. So now we have the situation if we would put in atoms and we would look at the energy eigenvalues. At this point, this is the maximally excited state in the atom which is still stable. So what I want to derive for you is that what determines the stability is simply the saddle point.
When the binding energy of the excited state, for which we use the Rydberg formula, is no longer stronger than the potential at the saddle point, the atom becomes unstable and becomes field ionized. And we'll discuss a little bit later if this really applies to real atoms. The quick answer is: for lithium and all the other atoms, it applies. For hydrogen, it doesn't. Because hydrogen has too many symmetries. Too many exact degeneracies. OK, so the total potential is the Coulomb potential plus the electric potential. What we need is the position of the saddle point, where we have a maximum in this one dimensional cut. And this one dimensional cut has a maximum at this position. And by taking the derivative of the total potential, you immediately find this value. And now what we are calculating next is, what is the potential energy at this point? And, well, this is just copied from the notes-- it goes as e to the 3/2 times the square root of the electric field. And now what we want to do is we want to postulate that for field ionization, this should be equal to the binding energy of the electron. Which is nothing else than the Rydberg constant divided by n squared. OK, now here, we have the square root of the electric field from this calculation. So that means the critical electric field will scale as 1 over n to the 4. And this is a famous scaling which can be found in many textbooks. That the critical electric field for ionization equals-- and now, that's the beauty of atomic units-- it is 1 over 16 n to the 4. A beautiful formula derived from the saddle point criterion. Of course, what I mean is, if you do the derivation, it's in atomic units. Which means in units of the atomic unit of the electric field, which is e over the Bohr radius squared. So it's a simple model. It's an analytic result. The question is, is it valid? Does it make any sense? And the answer is yes, but in a quantum mechanical problem, you would actually solve Schroedinger's equation in such a potential. But then, the onset of field ionization comes when tunnelling becomes possible through this barrier. But it is the nature of tunnelling that if you're a little bit too low, tunneling is negligible. You may have ionization rates of 1 per millisecond or so. And if you just go a little bit closer, it becomes exponentially larger. So, therefore, this scaling is very, very accurate. Because the transition where you go from weak tunneling, to strong tunneling, to spilling over the barrier, it's a very narrow range of electric fields. But, yes, people have looked at it in great detail and have calculated corrections due to tunneling. So these are quantum corrections to the classical threshold which I just calculated. But now in hydrogen, a lot of n, l mixing matrix elements-- matrix elements due to the electric field between l states-- vanish. Hydrogen is just too pure, too precise. There are actually parabolic quantum numbers where you can exactly diagonalize hydrogen in electric fields. And you find some stable states which do not decay. And they are above the classical threshold we have just calculated. So as Dan Kleppner would have said, the simplest of all atoms is the most complicated when it comes to field ionization. Because it has a lot of stable states above the classical barrier. So you can sort of envision that there will be orbits which are just confined to this region. And the electron never samples the saddle point. And if you look at this diagram on the wiki, these are actually calculations for hydrogen which include ionization rates.
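A minimal numerical sketch of the classical threshold just derived follows; the function name is mine, and the only input is the atomic unit of field, the 5 times 10 to the 9 volt per centimeter quoted earlier.

# Classical saddle-point threshold for field ionization: E_crit = 1/(16 n^4) in atomic units.
E_ATOMIC_V_PER_CM = 5.14e9   # atomic unit of field, e/a0^2, in V/cm (value quoted above)

def critical_field(n):
    """Saddle-point estimate of the ionization field for principal quantum number n, in V/cm."""
    return E_ATOMIC_V_PER_CM / (16 * n**4)

for n in (7, 10, 20, 50, 51):
    print(f"n = {n:2d}:  E_crit ~ {critical_field(n):9.1f} V/cm")
# n = 50 gives about 51 V/cm while n = 51 gives about 48 V/cm, which is what makes
# the state-selective detection described below possible.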
You will find that the states which are the ones which go down, which are on the downhill side of the electric field, you see always here marked an onset for ionization. And then you see a rapid increase in the ionization rate. But you also see those hydrogenic state which go upward in energy and they refuse to ionize. Because of the symmetry of parabolic coordinates and the things I've mentioned. Anyway, it's too especially to spend more time here in class on it, but I just think you should at least know qualitatively what is different for hydrogen. OK, so that's what I wanted to tell you about high electric fields and field ionization in principle. Let me now briefly mention important applications of field ionization. One is close to 100% detection efficiency for atoms. If you want to detect single atoms, for instance, you'll have a krypton sample and you want to find a rare isotope of krypton for dating the material. You need an extremely high sensitivity and you may just have a few single atoms in the sample. One way to do it would be that you excite the atom maybe through an intermediate state, to Rydberg state. And then by just applying a few volt per centimeter, you get an ion. And ions can be counted by particle detectors. You can accelerate the ion, smash it into a surface and count the particles with close to 100% efficiency. And this is one of the most sensitive detection schemes. I remember in the aftermath of Chernobyl, there was an interest in detection schemes for radioactive strontium. And on the wiki, I give you a reference where some people developed this resonance ionization spectroscopy for some atomic isotopes. Which unfortunately appeared more frequently after the Chernobyl disaster. And they developed a method based on excitation to Rydberg states. Which was more sensitive than other methods. You may ask why don't you ionize it with a laser? Well, the fact is, you can photoionize it with a laser. It's another alternative, but it takes much more laser power. Because if you excite into the continuum, the matrix element is much smaller. And often, if you want to have 100% ionization probability to go into the continuum, you need such high laser power that you may get some background of resonant ionization of other elements and such. So Rydberg atoms is really the smart way to go. You go to an almost bound to an almost unbound electron. And then it's just the electric field which causes the final act of ionization. I also want to briefly mention that the famous experiments on Rydberg atoms by Herbert Walther and Serge Haroche and collaborators would not have been possible without field ionization. I give you this one reference, but I'm sure you'll find more in the actually very, very nicely written Nobel lecture of Serge Haroche. I just read it a few weeks ago and it's a delight to read how he exposes the field. So they did QED experiments by having microwave transitions between atoms in two highly excited states. Let's say with principal quantum number 50 and 51. So that's now conveniently in the microwave regime. Then those atoms are passed through a cavity. And in this cavity, single atoms interact with single photons. And they have done beautiful quantum non-demolition experiments of single photons. I mean that's really wonderful, state of the art experiments. Back to Rydberg atoms and field ionization. Eventually, the read out of those experiments was you prepare atoms in the state 50 or 51. 
And afterwards, if they have absorbed or emitted a photon, they should be in a different state. So you were interested in a very high detection efficiency which could distinguish between 50 and 51. And of course there is a way to distinguish that. And this is because of n to the 4. You first apply an electric field which can only field ionize 51. And then you allow the atoms to propagate into a slightly higher electric field. And then the 50s are ionized. So the standard experiment is that you pass those atoms between two field plates. And by putting the plates at an angle, the field is increasing along the path. And then you have two little holes. You have some channeltron particle detectors. And the first detector will detect the 51s and the second detector will detect the lower lying states. So you can detect every atom with high probability, but also in a state selective way. So this way of doing state selective field ionization, based on the discussion we had earlier, is sort of the method of choice for experiments involving Rydberg atoms. Any questions about atoms in electric fields? Well, then, finally we add time dependence. So what we should do next is atoms in oscillating electric fields. It's also a good way to review what we have done. Because the first thing to do now is we calculate the polarizability for AC fields. We calculate the AC Stark effect. And, of course, if in the AC Stark effect we set omega to 0, we will retrieve the DC Stark effect. So in a way, what I'm doing for you now is I'm using time dependent perturbation theory to obtain a new result. But it will reproduce the result of time independent perturbation theory which we have just discussed. All right, so atoms in oscillating electric fields. Of course the next step is, and this is where we are working towards: we will in the next few lectures, starting on Wednesday-- well, then there is Spring Break-- but in the next lectures develop a deep understanding of what happens when atoms interact with light. And oscillating electric fields are already pretty close to light. And I want to actually also show you that we capture already a lot of the phenomena which happen with light, except for a full understanding of spontaneous emission. So pretty much, when we use an oscillating electric field, we allow the atom to interact with just one mode of the electromagnetic field, which is filled with a coherent state. And this is so classical that we don't even need field quantization. We just use a classical electric field. And this already gives us the interaction of atoms with light except for spontaneous emission, which involves all the other modes. So that's what we do later. But today, we just do the semi-classical description of an atom in an oscillating electric field. And this is the theory of the AC Stark effect. So all we do is an application of time dependent perturbation theory. So our electric field is now time dependent. It has a value epsilon, polarization e hat, and it oscillates. Our perturbation Hamiltonian is exactly the same dipole Hamiltonian we had before for the DC Stark effect. But now it's a time dependent one. And it will be useful to break up the oscillating term into e to the i omega t and e to the minus i omega t. I don't want to bore you with perturbation theory in quantum mechanics because you've all seen it in 8.05 or 8.06. I just want to jump to the result. You find more details about it on the wiki.
But all you do is you parametrize your wave function, you expand it into eigenstates with amplitudes an. And then you put it into the Schroedinger equation and assume that for short times the atom is in the ground state. The amplitude of the ground state is 1. And the amplitude of the excited state is so infinitesimal that you can use the lowest order of perturbation theory. This immediately gives you the first order result for the amplitude in an excited state, k. It only comes about because your initial state, the ground state, is coupled by the matrix element to the excited state. It's linear in the applied electric field. So what you do is you have the Schroedinger equation and you integrate it from time 0 to time t. And since you have e to the i omega t and e to the minus i omega t, you get two time dependent terms. So what appears now is, we have the frequencies omega n of the excited states. And now when we couple the excited state, k, to the ground state, what appears is the frequency difference between the two. That's pretty much the excitation gap. And we have a time dependent oscillation at omega. And then we have, of course, the same term where we flip the sign from omega-- let me write that more clearly-- where we flip the sign from omega to minus omega. So that's the second term. By integrating an exponential function with respect to time, we get an energy or frequency denominator, which is this one. So this is really just a straightforward, most basic, plain vanilla application of perturbation theory. The only thing I want to discuss, because it sometimes confuses people, is that we integrate from time 0 to a finite time. And when you integrate, you get contributions from the upper integration limit and from the lower integration limit. And this contribution at the lower integration limit is actually a transient. It is at the atomic frequency. It doesn't depend on omega. So it's a beat note between the ground and excited state. I could say at frequency omega k, but then, together with the ground state, it's omega kg. And this is because, if you switch on a perturbation, it's like you suddenly switch on the drive of a harmonic oscillator. And you have some ringing. You have a transient at the natural frequency of the harmonic oscillator. It has nothing to do with the drive. It's just a sudden onset. It's a transient at that frequency due to the sudden switch-on. So like any transient-- we haven't included damping in here. We don't have spontaneous emission. Everything is undamped. But eventually, all those transients will damp out with time. And as we should have known also from the beginning, when we drive a system, when we switch on a perturbation-- just think about a harmonic oscillator-- you have a response at the harmonic oscillator frequency which is always transient in nature. And then you have a response at the drive frequency. And we are interested, of course, only in the driven response. The driven response is at frequency plus or minus omega. That's how we drive it. But now I have to be careful: since I'm looking at the amplitude, and I factor out the time dependence of the eigenstates from the time dependence of the wave function, I'm now looking for drive terms in this expression for the amplitude which are at frequency omega, but modified by the frequency of the ground state. But anyway, what I mean is the relevant term is the one which depends on omega. And in the following discussion, I simply drop the minus 1 because it's a transient.
If you would switch on your time dependent electric field in a smoother way, this term would disappear. OK, let's now be specific. Let's assume the electric field points in the z direction. For an isotropic medium, the dipole moment, the time dependent dipole moment which we induce, is also pointing in the z direction. And so we want to calculate now, what is the dipole moment which is created by the drive term? By the driven electric field. And for that, we simply use the perturbation theory we have just applied. We take the ground state and its first order correction. And calculate the expectation value of the dipole moment. In the line at the top, we have the first order correction to the ground state wave function. And so we just plug it in. And what we obtain is a result where we have the matrix elements squared. Remember, we do first order perturbation theory, which is one occurrence of the matrix element, but now we take a second matrix element because we're interested in the dipole operator. So this gives us now a sum over matrix elements squared. We have e to the plus i omega t and e to the minus i omega t. This means we get 2 times the real part of this expression. And the time dependent term is e to the i omega t. And then we have the term with plus and minus omega. Yes. And so, most importantly, everything is driven by the electric field. We can now, just to write the result in an easier way, combine the terms with plus omega and minus omega. And taking the real part, we can write that as 2 times omega kg over omega kg squared minus omega squared, times cosine omega t, times the electric field. And now finally, we have the matrix element. We have integrated the e to the i omega t function and such. So what we have here now is the time dependent electric field. And what we have here is the factor by which we multiply the electric field to obtain the dipole moment. And this is the definition of the now time dependent, or frequency dependent, polarizability. AUDIENCE: Excuse me. PROFESSOR: Yes? AUDIENCE: Why are we only getting the cosine and not [INAUDIBLE]? You only multiply by the other terms to get the [INAUDIBLE]? In terms of the cosine. PROFESSOR: I haven't done the math yesterday when I prepared for the class. I did it a while ago when I wrote those notes, but you know, one comment-- I know you don't want to hear it, but it's the following. This system has no dissipation at all. And when I drive it, I will always get a response which is in phase with the drive. It can be cosine omega t, or minus cosine omega t. You cannot get a quadrature component at this point. So if you find I've made a mistake, and you would say there is a sine omega t term, I've made a mistake. I know for physical reasons, I cannot get a phase shift. You only get a phase shift in the response of a system to a drive when you have dissipation. AUDIENCE: OK. I'll have to talk with you. PROFESSOR: But why don't-- I mean, I know the result is correct. And this is just-- I hate to spend class time in trying to figure out if I've omitted one term. AUDIENCE: The real part. PROFESSOR: But the real part-- what I've probably done is-- AUDIENCE: I think that's probably what you did. Yeah. Thank you. PROFESSOR: I think that's what I did. So let me, therefore, also say we have now included the real part of it. Yes. OK, this part here can actually-- that's how we often report it. And that's how you often find in textbooks the result, the frequency dependence of the AC polarizability.
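The frequency dependence just quoted-- each intermediate state contributing 2 omega_kg over omega_kg squared minus omega squared-- is easy to play with numerically. The sketch below is only illustrative, for a single intermediate state with placeholder numbers; it checks that the omega goes to 0 limit reproduces the DC polarizability and previews the split into two terms discussed next.

import numpy as np

def alpha_ac(omega, omega_kg, z_kg):
    """AC polarizability from a single intermediate state, atomic units:
    alpha(omega) = 2 |z_kg|^2 omega_kg / (omega_kg^2 - omega^2)."""
    return 2.0 * abs(z_kg)**2 * omega_kg / (omega_kg**2 - omega**2)

omega_kg, z_kg = 0.375, 0.745         # placeholder numbers (hydrogen-like 1s-2p values)

print(alpha_ac(0.0, omega_kg, z_kg))  # omega -> 0 recovers the DC result, ~2.96 a0^3
# The same lineshape splits into a co-rotating and a counter-rotating piece:
#   2*omega_kg/(omega_kg**2 - omega**2) == 1/(omega_kg - omega) + 1/(omega_kg + omega)
omega = 0.2
print(np.isclose(2*omega_kg/(omega_kg**2 - omega**2),
                 1/(omega_kg - omega) + 1/(omega_kg + omega)))   # True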
But I like to rewrite it now in a different way which is identical. And it shows now that there are two contributions. And I will discuss them in a moment. But those two contributions, one has in the denominator, let's assume we excite the system close to resonance. Omega is close to omega kg. Then one term is much, much closer. It's to a near resonant excitation. And from our discussion about rotating frames, we have all of this near resonant excitation corresponds to a corotating term. And the other one corresponds to a counter rotating term. We've not assumed any rotating fields here, but we find those terms with the same mathematical signature. And I will discuss that little bit later. But the physics is between the corotating and counterrotating term. And it's the corotating term which is the term which [INAUDIBLE] the so-called rotating wave approximation. I just want to identify those two terms and let's hold the thought for until we have the discussion. What I first want to say is the limiting case. We have not made any assumptions about frequency. When we let omega go to 0, we obtain the DC result. It is important to point out that when we have the DC result, we can only get to correct results because we have equal contributions from co and counterrotating terms. So that's sort of a question one could ask. You know, which mistake do you do for the DC polarizability when you do the rotating wave approximation? Well, you miss out on exactly 50% of it. Because both terms become equally important. I have deliberately focused here on the calculation of the dipole moment. Because the dipole, I simply calculated the dipole moment as being proportion to the electric field. And the coefficient in front of it is alpha the polarizability. You may remember that when we calculated the effect of a static electric field, we looked for the DC Stark shift for the shift of energy levels. We can now discuss also the AC Stark shift. Which is a shift of the energy levels due to the time dependent field. But I have to say you have to be a little bit careful. And sometimes when I looked at equations like this, there is a moment of confusion what you want to actually, what the question is. Because the wave function is now a time dependent wave function. It's a driven system. It's no longer your time independent Schroedinger equation and you are asked, what is the shift in the value of eigenvalues? So the AC Stark shift here is now given by the frequency dependent polarizability. And then, and I know some textbooks do it right away. And at the end of the day, it may confuse you. It uses an average of e square. So in other words, if you have an electric field which is cosine omega t, and you calculate what is the AC Stark shift, you get another factor of 1/2. Because cosine square omega t time average is 1/2. So anyway, just think about that. It's one of those factors of 1/2 which is confusing. Will, you have a question? AUDIENCE: So when we take omega goes to 0 from our previous results, are we still justified in neglecting the transient term? PROFESSOR: Yes. But why? What will happen is the transient term is really a term which has time dependence. And even if omega is 0, just the step function of switching it on creates an oscillation in the atom at a frequency which is omega excited state minus omega ground state. You may think about it like this. I give you more the intuitive answer. Take an atoms and put it in electric field. 
If you gradually switch on the electric field, you create a dipole moment by mixing at 0 frequency a p state into the s state. And that displaces the electron from the origin. But if you suddenly switch on the electric field, you actually create a response of the atoms which has a beat node between the excitation frequency of the p state and the excitation frequency of the s state. And what you regard is the DC response of the atom is everything except for this transient term. However, and this tells you maybe something about the different formulas in quantum mechanics when we talked about the time independent perturbation theory. We never worried about the switch on because we just did time independent perturbation theory and we sort of assumed that the perturbation term had already existed from the beginning of the universe. So it's not that we excluded the term. We formulated the theory in such a way that the term just didn't appear. But if you switch on a DC field, you should actually if you want an accurate description, do time dependent perturbation theory, you get the transient term even for DC field. And then you discuss it away the way I did. OK, so if you want, these are the textbook results. We could stop here. But I want to add three points to the discussion. You can also see at this point, we really understand that AC Stark shift theory as you find it in generic quantum mechanics textbooks. And now I want to give you a little bit of sort of extra insight based on my knowledge of atomic physics. So there are three points I would like to discuss here. The first one is the relation to the dressed-atom picture. The second one is I want to parametrize the results for the polarizability using the concept of oscillator strengths. And thirdly, I want to tell you how you can, already at this point, take our result for the AC polarizability and calculate how do atoms absorb light and what is the dispersive phase shift which atoms generate when they're exposed to light. Or in other words, I want to show you that based on this simple result, we have pretty much already all the information we need to understand how absorption imaging and dispersive imaging is done in the laboratory. So these are the three directions I would like to take it. So let's start with number one. The relation to the dressed-atom. So what I want to show you now is that a result we obtained in time dependent perturbation theory, we could've actually obtained in time independent perturbation theory by not using a coherent electric field which oscillates. But just assuming that they're stationary Fock states of photons. And this is actually the dressed-atom picture. I know I'm throwing now a lot of sort of lingo at you. It's actually very, very trivial. But I want to show you that you have a result where if you just open your eyes, you see actually the dressed-atom shining through. So what we had is we had an energy shift here. Which was let me just summarize what we have derived. Was this result with a polarizability which we derived. And let me rewrite the results from above. So this is why I made the remark about the averaging of the electric field. If you combine it with, as you will see, Rabi frequencies and dressed-atom picture, you'll need the amplitude of the electric field. And formulated in the amplitude of the electric field, you have a one quarter. And this is not a mistake. And you can really trace it down to the time average of the cosine term. 
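The one quarter mentioned here is just the time average of cosine squared; a two-line numerical check, with the drive frequency set to 1, is shown below as a sketch.

import numpy as np

t = np.linspace(0.0, 2*np.pi, 100000, endpoint=False)   # one full drive period, omega = 1
print(np.mean(np.cos(t)**2))                             # -> 0.5, i.e. <E^2(t)> = eps0^2 / 2
# So written with the field amplitude eps0, the AC Stark shift carries a 1/4:
#   Delta E = -(1/2) * alpha * <E^2(t)> = -(1/4) * alpha * eps0^2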
OK, so this is now-- I'm really copying from the previous page. We had the difference frequency between state 1 and state 2. So I'm simply assuming that we couple two states. Now an s and p state, If you want. We have a matrix element which is the matrix element of the position operator, z, between state 1 and state 2 squared. We have an energy denominator which was this one. And we have-- so I'm just rewriting the previous result. But now I usually hate matrix elements when they appear in an equation. I mean, who knows matrix elements. What is the relevant thing when we couple two different states is the Rabi frequency. Frequency units is what we want. So, therefore, I have prepared the formula that I can take the matrix element with the electric field. And this is nothing else than the Rabi frequency squared. Or actually, one is measured as energy units. The other is frequency units. So there is an h bar square. So, therefore, I have now written this result in what I think is a more physically insightful way by explicitly identifying the Rabi frequency which couples ground and excited states. And I also want to separate, want to introduce the detuning of the time dependent oscillating electric field from resonance. And then I obtain this result. Doesn't it look so much simpler than what we had before? And it has a lot of physics we can discuss now. One over delta is sort of like an AC Stark effect in one limit. It's a far-off resonant case of an optical trapping potential. So this formula has a lot of insight which I want to provide now. The second part I give you the name and the interpretation will become obvious in a moment. Is the important Bloch-Siegert shift. It is the AC Stark shift due to the counterrotating term. So what I'm motivating here is just don't get confused. What I write down is very simple. And I sometimes use advanced language for those of you who have heard those buzz words. But what I really mean is what I want to discuss and you to follow are the simple steps we do here. So anyway, what I've just done is I've rewritten the result from the previous page by just introducing what I suggest as more physically appealing symbols. And now I want to remind you that this result for the AC Stark effect doesn't it look very similar not to a result, to the standard result of time independent perturbation theory? And, of course, you remember that in time independent perturbation theory, you get an energy shift which is the square of a matrix element divided by detuning. So it seems when we inspect our result for the AC Stark effect which came from time dependent perturbation theory, that this result here has actually-- if we map it on time independent perturbation theory, it has two terms. Both coupled by the Rabi frequency. But one has a detuning of delta. And the other has a detuning of minus 2 omega, minus delta. So it seems that the result for the AC Stark shift can be completely understood by a mixture of not one, but two different states with different detunings. And this is exactly what we will do in 8.422 in the dressed-atom picture when we have quantized the electromagnetic field. In other words, we have photons. Because then what we have is the following. We have the ground state with n photons, n gamma. Well, there are sort of n quanta in the system. n photons. But what we can do is we can now have one quantum of excitation with the atom. And n minus 1 photon. So it's almost like absorbing a photon. And this state has a detuning of delta. 
But then we can also consider an excited state. In other words, here, we connect to the excited state by absorbing a photon. But we can also, we talk about it more later, we can also connect to the excited state by emitting a photon. So this state has now not one quantum of excitation, there are three quanta of excitation: one in the atom and two in the photon field. So, therefore, its detuning is now much, much larger. Actually, if we were on resonance, the detuning would be just 2 omega. But if you are detuned, there is the delta. So in other words, we can just say if we would do time independent-- I'm not doing it here. And I leave kind of all the beauty to when we discuss the dressed-atom picture in its full-fledged version. But all I'm telling you is that the result for the AC Stark shift looks like time independent perturbation theory with those two detunings. And I'm now offering you the physical picture behind it by saying, look, when we have the ground state with n photons, and we have those two other states, they have exactly the detuning which our result suggests. And yes, indeed, if you look at the manifold of those three states and we would simply do time independent perturbation theory, we would find exactly the same frequency shifts, the AC Stark shift, as we just obtained in a time dependent picture. So in other words, what I'm telling you is there are two ways to obtain the AC Stark shift. One is you do time dependent perturbation theory assuming an oscillating electric field. And that would mean in the quantized language you assume the electromagnetic field is in a coherent state. Alternatively, you can quantize the electric field and introduce Fock states. And then, because Fock states are time independent, these are the eigenstates of the electromagnetic field in a cavity. And now you obtain the same atomic level shift in time independent perturbation theory. So in other words, we can have photon number states and do time independent perturbation theory. Or alternatively, we can use a semi-classical electric field, which means we have a classical electric field, and then we can treat it in time dependent perturbation theory. All the textbooks generally use the latter approach, because it uses a semi-classical electric field. But I can tell you, I strongly prefer the first approach. Because in the first approach, you have no problems whatsoever with questions like: What is the time dependent wave function? What is an energy level shift when you have a driven system? In a time independent way, everything is just simple and right there. But it's two different physical regimes. Questions? OK, second point of discussion is the concept of the oscillator strengths. So what I'm teaching in the next five minutes is so old-fashioned that I sometimes wonder should I still teach it or not. On the other hand, you find it in all the textbooks. You also want to understand a little bit the tradition. And at least I'm giving you some motivation to learn about it: if I parametrize the matrix element with an oscillator strength, and most of your atoms, most of the alkali atoms, have an oscillator strength for the s to p transition, for the D-lines, which is unity, you can actually write down what is the matrix element, what is the spontaneous lifetime of your atom, without knowing anything about atomic structure. Just memorizing that f equals 1, the oscillator strength is 1, is pretty much all you have to know about your atom.
And the rest, the only other thing you have to know is what is the resonant frequency of your laser? 780 nanometer? 589 nanometer? 671 nanometer? So the modern motivation for this old-fashioned concept is for simple atoms where the oscillator strength is close to 1, this is probably the parametrization you want to use because you can forget about the atomic structure. But the derivation would go as follows. I want to compare our result. How an atom responds to an electric field. I want to compare this result to the result of a classical oscillator. So compare our result for the AC polarizability to a classical harmonic oscillator. So I assume this classical harmonic oscillator has a charge, a mass term, and a frequency. And both the atom and the classical harmonic oscillator are driven by the time dependent electric field. Which we have already parametrized by cosine omega t. OK. If you look at the classical harmonic oscillator, you find that the-- you drive it at omega and ask what is its time dependent dipole moment? It's driven by the cosine term. And what I mean, of course, is dipole moment of a classical harmonic oscillator is nothing else than charge times displacement. And, well, if you spend one minute and solve the equation for the driven harmonic oscillator, you find that the response, the amplitude, the steady state amplitude, [? zk ?] of the harmonic oscillator is cosine omega t. Times a prefactor which I'm writing down now. There is this resonant behavior. So that's the response of the classical harmonic oscillator. And yeah. So this is just classical harmonic oscillator physics. And I now want to define a quantity which I call the oscillator strength of the atom. So I'm just jumping now from the harmonic oscillator to the atom and then I combine the two. And the oscillator strength is nothing else than a parametrization of the matrix element between different states. But it's dimensionless. And it's made dimensionless by using the mass. By using h-bar. And by using the transition frequency. So the atomic-- let me just make sure we take care of it. This is the result for the classical harmonic oscillator. And this is now the result for the atom that we have already found before. And now I'm rewriting it simply by expressing the matrix element squared by the oscillator strength. And this here is just another expression for the polarizability alpha. Well, let's now compare the result of a quantum mechanical atom exactly described by time dependent perturbation theory to the result of a classical harmonic oscillator. The frequency structure is the same. So if I would now say we have an ensemble of harmonic oscillators, and the harmonic oscillators may have different frequencies and different charges, then I have made those formulas exactly equal. And I can now formulate that the atom reacts to a time dependent electric field exactly as an ensemble of classical oscillators with effective charge. If I would say I have an ensemble of oscillators with effective charge, then the response of the atom and the response of the ensemble of classical oscillators is absolutely identical. So the atom responds as a set of classical oscillators with an effective charge which is given here. So, therefore, you don't have to go further if you want to have any intuition how an atom reacts to light. The classical harmonic oscillator is not an approximation. It is exact.
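Spelled out, the correspondence just stated looks roughly like this; this is a sketch with damping neglected, and conventions for the oscillator strength differ slightly between texts.

```latex
% Driven classical oscillator (charge q, mass m, frequency omega_0):
m\ddot z + m\omega_0^2 z = qE_0\cos\omega t
\;\;\Rightarrow\;\;
d_{\rm cl}(t) = q\,z(t) = \frac{q^2}{m(\omega_0^2-\omega^2)}\,E_0\cos\omega t

% Oscillator strength: dimensionless, built from the mass, hbar, and the transition frequency
f_k = \frac{2m\,\omega_{kg}}{\hbar}\,|\langle k|z|g\rangle|^2

% The atomic polarizability then reads like an ensemble of classical oscillators,
% one per transition, with effective charge squared e^2 f_k:
\alpha(\omega) = \frac{e^2}{m}\sum_k \frac{f_k}{\omega_{kg}^2-\omega^2},
\qquad \sum_k f_k = 1 \;\;\text{(Thomas-Reiche-Kuhn sum rule)}
```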
So that result is relevant because it allows us to clearly formulate the classical correspondence. The second thing is as you can easily show with basic commutator algebra, there is the Thomas-Reiche-Kuhn sum rule which is discussed in all texts in quantum physics which says that the sum over all oscillator strengths is 1. So, therefore, we know if we have transitions from the ground state to different states, the sum of all the oscillator strengths to all the states can only be 1. And another advantage of the formulation with oscillator strength is that it is a dimensionless quantity. It's a dimensionless parameter which tells us how the atom responds to an external electromagnetic field. I just need two or three more minutes to show you what that means. If you have hydrogen, the 1s to 2p transition is the strongest transition. And it has a matrix element which corresponds to an oscillator strength of about 0.4. So the rest comes from more highly excited states. However, for alkali atoms, the D-line, the s to p transition has an oscillator strength. I didn't write down the second digit, but it's with excellent approximation 0.98 or something. So not just qualitatively, almost quantitatively, you capture the response from the atom by saying f equals 1. So if you use for the alkali atoms f equals 1, then simply the transition frequency of the D-line gives you the polarizability alpha. And as we will see later, because we haven't introduced it yet, it will also give you gamma, the natural line width. Because all the coupling of the atom to an external electromagnetic field is really captured by saying what the matrix element is. And f equals 1 is nothing else than saying the matrix element is such and such. And, indeed, if I now use the definition of the oscillator strength in reverse, the matrix element squared between two states is the oscillator strength times one half. And if you just go back and look at the formula, you find that the oscillator strength is dimensionless. So, because the left hand side is a length squared, we now need two lengths. One is the Compton wavelength, which is h-bar over the mass of the electron times c. And lambda bar is the transition wavelength, lambda, divided by 2 pi. So, therefore, I haven't found it anywhere in textbooks, but this is my sort of summary of this. If you have a strong transition, and strong means that the oscillator strength is close to 1, then the matrix element for the transition is approximately the geometric mean of the Compton wavelength of the electron and the reduced wavelength of the resonant transition. So now if you want to know what is the matrix element for the d-line of rubidium, take the wavelength of 780 nanometers, take the Compton wavelength of the electron, and you get an accurate expression. I know time is over. Any questions? All right.
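As a quick numerical illustration of that geometric-mean rule (a minimal sketch, not from the lecture; the oscillator strength f = 1 and the 780 nanometer wavelength are the assumed inputs):

```python
# Rough numerical check of the geometric-mean rule quoted above:
#   |<e|z|g>|^2 ~ (f/2) * lambdabar_C * lambdabar
# with lambdabar_C = hbar/(m_e c) the reduced Compton wavelength and
# lambdabar = lambda/(2*pi) the reduced transition wavelength.
import math

hbar = 1.054571817e-34    # J*s
m_e  = 9.1093837015e-31   # kg
c    = 2.99792458e8       # m/s
a0   = 5.29177210903e-11  # Bohr radius, m

f          = 1.0          # assumed oscillator strength (alkali D-line)
lam        = 780e-9       # assumed D-line wavelength, m
lambdabar  = lam / (2 * math.pi)
lambdabarC = hbar / (m_e * c)

z = math.sqrt(0.5 * f * lambdabarC * lambdabar)   # estimate of the matrix element, in meters
print(f"reduced Compton wavelength = {lambdabarC:.3e} m")
print(f"|<e|z|g>| estimate = {z:.2e} m = {z/a0:.2f} Bohr radii")
```

The estimate comes out to a few Bohr radii, which is the expected order of magnitude for the matrix element of a strong optical transition.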
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
22_Coherence_II.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Good afternoon. We are on the finish line. Two more weeks to go. Our last chapter is coherence, and I can promise you this chapter of coherence has some highlights, so we are not going to end with a boring subject. Actually, some of the best subjects, some of the most exciting topics are still to come. So today, we continue our discussion of coherence. As I pointed out last week, we first talk about coherence in single atoms and then coherence between atoms. In the first part on coherence, I want to come back to this topic of spontaneous emission, because many of us have deep rooted misconceptions about what spontaneous emission is. We discussed on Wednesday that spontaneous emission is not so spontaneous as many of us assume because it's a unitary time evolution with an operator, with a term in the Hamiltonian. It is exactly this operator which takes the wave function of the total system, the atoms and the light, to whatever it is later on. There is no random phase, there is no random variable in this time evolution, exclamation mark. But there are certain aspects associated with spontaneous emission, and I want to address them. On the other hand, if you think about spontaneous emission in the most fundamental way, the first thing you should think about is vacuum Rabi oscillation. Here you see in the simplest possible system what spontaneous emission can do for you. The way we want to discuss an important aspect of spontaneous emission, we want to go beyond the vacuum Rabi oscillation, is the following. We start with an atom in the ground state and the cavity is in the vacuum state, but now we take a short pulse of a laser and we prepare the atom. And because the laser outputs a coherent state, the coherent state has a well defined phase, and this phase appears in the superposition between ground and excited state because this superposition is created with the matrix element which has the electric field of the laser. But then we allow spontaneous emission to happen, and spontaneous emission to happen means we take our operator which I just showed you, we propagate forward in time in such a way that we just go through half a cycle of a vacuum Rabi oscillation, which means everything which was in the excited state is now in the ground state. And by just exactly propagating this system forward in time, we obtain this state, and that's something, I hope, very, very insightful, which we reached at the end of the last lecture: the quantum state of the atom has been perfectly mapped onto the photon field. So all the information which was in the atom before spontaneous emission is now available in the photon field. So the next thing to address is the phase phi. What is the phase of the spontaneously emitted photon, and this is what we want to understand now. So how well can we measure the phase phi? You should first assume the phase phi is perfectly determined with extremely high accuracy if you use a laser beam which has a macroscopic electric field. The phase phi is a classical variable and can be determined with arbitrary precision. And therefore the phase phi, which we have imprinted first into the atomic wave function and then in the photon field, is an exact number.
It comes from the laser beam. What I'm showing here is phase space plots for the photon field. I know we talk about photons and two dimensional phase-space distributions mainly in 8.422, but I think the pictures speak for themselves. A lot of you have seen the harmonic oscillator: if you start to prepare the system here, this is position, this is momentum, the system evolves in a circle. A lot of you have seen, if we regard photon states as states of a harmonic oscillator, which they are, that you have Fock states, which are just circles, or the vacuum state is just a tiny circle at the center. And if you have a coherent state, a coherent state is maybe a little blob out there, and for the coherent state, you can determine the phase because the angle of this little blob relative to the origin is well determined. I think you all have seen a version of that. So anyway, what is done here is I show you this phase-space plot for the photon field, and what happens is if initially, the excited state was zero, this is just the ground state of the harmonic oscillator. It's a circle. If the excited state was occupied with unity probability, it's a Fock state with n equals 1, and here you see the phase-space plot of a Fock state with n equals 1. And of course, you realize if you have exactly one photon or one atom in an excited state, there is no phase information left because the phase is actually the relative phase in the superposition between ground and excited state. If you have an excited state, only an excited state with a phase factor, you know that a phase factor which can simply be factored out of the total wave function is never measurable. What is measurable are phases which are relative phases between two amplitudes which are populated. And of course, not surprisingly, if we now vary the excited state fraction of the atom, that is, the probability to have a photon in the photon field, from zero to one and anything in between, we sort of see that this phase-space distribution, it points along the 45 degree axis and we can measure the phase. And the most accurate phase measurement can be done if the superposition between ground and excited state is 50-50, or, talking about the photon field, we have a 50-50 superposition state between no photon and one photon. But the phase here is undetermined and the phase here has quite a bit of variance because if you have a single photon, there's only so much accuracy for the phase. It would require more discussion, but sometimes you talk even about an uncertainty relation, delta n delta phi equals 1. So if you have one photon, you can only measure the phase with precision on the order of unity. If you had millions of photons, then you can do very accurate phase measurements. So what we have is-- let me just summarize the conclusion. So the phase phi is best defined in the atom, and therefore also in the photon field, when we have an equal superposition of spin up and spin down, of ground and excited state. And you also get that from the Bloch vector picture, if you have a Bloch vector which is pointing like that, it doesn't have a phase, it's just pointing up. If it's pointing down, there is no phase. But if it's a 50% superposition state, it points in the xy plane and you have the best definition of the relative phase of the amplitude between ground and excited state. So I mentioned this Heisenberg uncertainty relation.
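To write that conclusion compactly (schematic only, with deterministic phase factors picked up during the evolution absorbed into phi): the short excitation pulse followed by half a vacuum Rabi cycle maps

```latex
\bigl(c_g\,|g\rangle + c_e\,e^{i\varphi}\,|e\rangle\bigr)\otimes|0\rangle
\;\longrightarrow\;
|g\rangle\otimes\bigl(c_g\,|0\rangle + c_e\,e^{i\varphi}\,|1\rangle\bigr)
```

and the phase is best defined when the two amplitudes are equal, with a single-photon uncertainty of roughly delta n delta phi on the order of 1.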
The fact is just looking at these phase-space plots, you realize the angle which we can determine here for the photon distribution will have quite a variance, but now I want to discuss with you how would we actually go about it, how would we measure the phase of the photon field? And this requires a homodyne experiment, a beat experiment where we interfere the emitted photon with a local oscillator, which is the laser beam which was used in the first place to excite the atom. And what we will find out, and it's clear that we cannot obtain a sharp value of the phase, but these fluctuations in the phase do not come from any partial trace, do not come from any fluctuations in the Hamiltonian. Just to address that, when we write down the term in the Hamiltonian, the e dot d term, yes, depending on the basis set, depending how you define spin up, spin down, and what phase factors you put into your basis set, you may have a phase popping up in the Hamiltonian, but this phase is purely definitional. The phase I'm talking about is really a relative phase between two amplitudes, and it is independent of a phase which may be your choice by choosing the basis set in which you formulate the Hamiltonian. Therefore, when we measure the phase, and we find that there are fluctuations, they actually come from the quantum nature of the states involved. Let's talk about the measurement and let me set it up generically. Here is our atom, here is the laser beam, and we want to create a Mach-Zehnder interferometer. Let me just use another color for the laser beam. Why don't we take sodium today, which has emission in the orange? The idea is the following. We have a laser beam which is used to excite the atom, and here we have a switch. And what we let through is only a certain pulse. Let's say if we want to have a coherent superposition between ground and excited state, it would be a pi over 2 pulse. Then, after the atom has absorbed the pulse, we switch off the light path. So then in the second stage, the atom can emit, and the emitted light interferes with the local oscillator, which is the laser beam, and we can measure the beat note on the detector. This is the scheme how we do a homodyne measurement. And so we assume we have a very short pulse which excites the atom. Then we switch off the laser in the upper path and the light which reaches the detector for homodyne is only the light which has been emitted by the atom maybe a nanosecond later. So we do a homodyne measurement of the phase of the wave or the wave train emitted by the atom. And the distribution of measurements for the phase, I don't want to give you mathematical expressions, but it's pretty much what you can read from the drawing I've shown you. So for a pi over 2 pulse, we retrieve the phase phi, but with fluctuations. Let's now come to the interesting case that we have a pi pulse. The pi pulse prepares the atom in an excited state, and at t equals 0 after the excitation, there's absolutely no coherence. The density operator for the atom has just a one in the column and row for the excited state. There's no off diagonal matrix element. There is no phase information. So at t equals 0, no coherence, no phase. So now we have excited the atom with a pi pulse, but there is no phase information in the atomic system, and that would also mean that when we now start mapping the quantum state of the atom onto the quantum state of the light, there won't be any specific phase for the light.
We could say, after the spontaneous emission is over and before we do any measurement process, we have mapped a Fock state of the atom onto a Fock state, n equals 1, of the photon field, and there is no phase associated with a number state. But let's be a little bit more specific here. Let's assume we can have an ensemble of atoms, we can repeat the measurement many times, and let's ask the question, what happens after the atom which was originally in the excited state has decayed to 50%? Well, then we have a wave function which is a superposition of ground and excited state, and there is a phase phi now, but this phase phi is completely random. So for those of you who are concerned that I call it a wave function, you can be more specific in the sense of the quantum Monte Carlo wave function: at any given moment you have a wave function, but the ensemble of your atoms is now an ensemble of all those wave functions with a random phase phi. This is a way how you can decompose the statistical operator of the system, but the result is the phase is random. If the phase is random, that means no coherence. The statistical operator does not have an off diagonal matrix element. It also means that, if you would ask what is the ensemble average of the dipole moment, the dipole moment is given by the Bloch vector. Well, if all phases are equally populated in your ensemble, the dipole moment average is zero. But, of course, you have a d squared value, a value of the dipole moment, which is not zero. So here we have now a situation where the photon field has a random phase because we lost the phase information of the laser beam when we put the atom into an excited state, and you may now ask, what is the origin of this phase uncertainty? And at least the qualitative answer is it's vacuum fluctuations. You can take the concept of vacuum fluctuations a little bit further. I'm just mentioning it, but I will not work it out. The fact that the phase of this photon which is random is somewhat associated with vacuum fluctuations, you can address this question when you talk about two atoms. So we have two atoms. We excite them both with our pi pulse into an excited state, and then, as time goes by, we will have atoms which create photons. And at least as long as the atoms are well localized within optical wavelengths, you could play with the idea that if there are vacuum fluctuations, maybe the two atoms will see the same vacuum fluctuations. And therefore, indeed, you will actually observe correlations in the relative phase. So if you measure the phase of the light emitted spontaneously by the two atoms, you will find a correlation which is due to the fact that-- I'm waving my hands here, but that the spontaneous emission was triggered by the same random vacuum fluctuations. So the absolute phase will be completely random from time to time, but the relative phase will be correlated. But what we are talking about here is correlations between two atoms. We will talk later about superradiance, and maybe this will make it much clearer what it means if several atoms emit spontaneously together. Any questions? Yes, Colin? AUDIENCE: Are there any requirements on these two atoms being located within an optical wavelength of each other? PROFESSOR: Yes and no.
In the simplest example of superradiance, we want to put them within one optical wavelength, and then we do not have any phase factors, but we will talk about it next week, that we also have superradiance in extended samples, and then we only get the superradiance, the coherence between atoms, into a smaller solid angle where the different phases are very well defined. If you would now average the spontaneous emission over different directions, you would get propagation phases and the atoms would only be coherent in one solid angle but not be coherent in another solid angle. Other questions? That's, to the best of my knowledge, at the most fundamental limit, what spontaneous emission is, how accurately a spontaneously emitted photon carries forward the phase of the laser beam which excited the atom, and then eventually when we have completely lost the phase because we excited the atom to an excited state. Everything that we discussed will be actually carried to the next level when we discuss superradiance because then we have n atoms-- n can be a big number-- which are excited together, and if they emit photons, the phase of this n photon field can be very precisely measured. So some of the uncertainties we have here simply come from the fact that, if you have only one photon or one atom, there are naturally quantum fluctuations of any phase measurement. But that part will go away when we go to ensembles of atoms where we have many atoms, and superradiance is then the way how we can revisit the subject, how well can you retrieve the phase of the laser field from the spontaneously emitted photons. Other questions? Nancy? AUDIENCE: [INAUDIBLE] single atom? So pi over 2, I can see that we do a homodyne measurement and get the phase out. Do we need dipole moments for pi [INAUDIBLE], or is this just a science that we're going to use for many other things? PROFESSOR: What's the question? We have pi over-- AUDIENCE: What measurements do we need if we have just one atom? Do we make any measurements, or no phase information? PROFESSOR: I think the measurement is, in a way, what I indicated here. We excite the atom, then we switch off this pulse, and then we take this short pulse of light. It's a wave train which has a duration on the order of the natural lifetime of the atom, and this wave train is interfered with a local oscillator, and the interference term allows us to retrieve the phase. And if you use a strong local oscillator, then we pretty much retrieve the quantum limit of the measurement, and the quantum limit of the measurement is what I showed you in these cartoon drawings of the phase-space distribution. AUDIENCE: So essentially, we can [? read a ?] Fock state like this? PROFESSOR: If you have a Fock state and you repeat the measurement many times, we will measure a random phase. So what happens here is-- let me put it this way. The homodyne detection is a way how we want to measure the phase, and whenever you want to measure the phase, you get a phase because the number you get from a phase measurement is a phase. But if you have a Fock state which has not a specific phase but an equal probability for all phases between zero and 2 pi, then, if you repeat a phase measurement many, many times, you will get a random result for the phase. AUDIENCE: I think that's what my question originally was. What measurement would you perform for this [? pi phase? ?] Would you still do a phase measurement? PROFESSOR: It's your choice.
If you want to do a phase measurement, that's a way to do it, and then for a Fock state, you will get a random phase. But maybe for the Fock state, of course, you can say in hindsight, the Fock state doesn't have a phase, so maybe you shouldn't bother measuring the phase. The special thing about the Fock state is that it has exactly one photon, and so maybe you want to have a measurement which is measuring the special character of the Fock state, namely that you have a sub-Poissonian distribution of the photons. Of course, this aspect of just having one photon gets completely lost when you have a beam splitter and you have zillions of photons in your laser beam with all the Poissonian fluctuations in the coherent state and you superimpose it. But this is nothing else than complementarity. You can either measure the phase or you can measure the photon number, and the question is, what are you interested in? This is one aspect of coherence in a two level system, namely that we have a phase in the two level system and the question is, how can we measure it? And the answer is we can map it on the photon field and then perform a quantum measurement on the photon field. I want to continue with some other aspect of coherence in a single atom. Let me just point out one important aspect about coherence in a two level system, and this is related to something very mundane, the precession of spin-- when it's a two level system, it's spin 1/2-- in a magnetic field. In other words, I just want to quickly remind you in a few minutes that for any two level system, we can always map it on spin 1/2. I was really emphasizing this message throughout the whole course. But for spin 1/2, if you think of spin up or down in a magnetic field, there is a very clear visualization of the coherence. If you have a coherent superposition of spin up and down, the phase of the superposition decides whether the spin points in x or y. So the precession of a spin in the transverse xy plane is actually the manifestation of coherence, and it's not just the special coherence of spin 1/2 in a magnetic field because all two level systems are isomorphic to that. You can always use it as an intuitive visualization of what coherences are. So what I just want to point out is the relation to the quantum mechanical or classical precession of spin in a magnetic field, that it is simply an effect of coherence within one atom, coherence between two levels in an atom. So if spin points in the x direction, it is a coherent superposition of plus z and minus z, spin up and spin down, but this is a situation at time t equals 0. If we let time evolve, spin up and spin down evolve with the Larmor frequency, actually a factor of 1/2, but with opposite phases because one has plus the Larmor, h bar omega Larmor over 2. The other has minus h bar omega Larmor over 2 as energy. In other words, if you look at the relative phase, it's a beat note at the splitting between spin up and spin down. But that means now, due to this coherent time evolution of the two amplitudes, that the spin precesses in the xy plane. The statistical operator for the spin 1/2, we have 50-50 population in spin up and spin down, but now, the phase here precesses as e to the minus and e to the plus i omega Larmor t, which means, if you use the statistical operator and find the expectation value for the x spin.
It means we take the statistical operator describing the pure state of a two level system, we multiply with a Pauli matrix in x, and this is a prescription to get the expectation value for sigma x, and we find it's cosine omega lt. So the spin is precessing the x component changes cosinusoidally. That would mean the y component changes sinusoidally. Let me just contrast it to the case of no coherence, and this would mean off diagonal matrix elements are 0. then if you have a statistical operator where the off diagonal matrix elements are zero, in one minute, you can show that then any expectation value for the x or y component of the spin vanishes. In that case, if you have a statistical mixer between spin up and down, of course there is no phase determined, and it is the phase of the superposition state which tells you where between 0 and 2 pi the spin is pointing in the xy plane. I want to come back later on when I discuss an example of coherent spectroscopy, that if you excite coherently a superposition of spin up and spin down, you can perform some form of coherent spectroscopy, which I want to explain first in general and then come back to the spin as an example. When we talk about coherent spectroscopy, I want to just in 10 minutes or 15 minutes show you some spectroscopic techniques which exploit the coherence between several quantum states. I do it for a number of reasons. One is coherent spectroscopy actually allows us to obtain information about the level structure even if this level structure is much narrower than the Doppler width. So it is a sub-Doppler technique to exploit coherence. And before people had lasers, before people invented sub-Doppler laser spectroscopy, often, coherent spectroscopy was the only way how you could obtain detailed structure of the atom. The reason why I explain coherent spectroscopy is to just give you a little bit idea about that you appreciate how smart people were before lasers were developed, but also, it illustrates what coherence can do for us. It's a nice example for the concept of coherence. When I was a graduate student, textbooks had dozens of pages, 50 pages on coherent spectroscopy, the Hanle effect, quantum beat measurements. It's all old fashioned because with a laser, and especially cold atoms and the laser, we have such wonderful tools to go to the ultimate fundamental precision of quantum measurements. But still, coherence is important. Let me talk about one method, which is called quantum beat spectroscopy. The selling point about quantum beat spectroscopy is that it allows the measurement of narrow level spacings-- just think about Zeeman splitting in a magnetic field-- without any form of narrow band excitation. You can also put it like this. If you don't have any way to selectively excite levels, but you're interested what is the level spacing, but you cannot have a narrow band laser, have atoms which stand still and scan and get peak, peak, peak, what you can still do is you can just excite all of the levels at once. In other words, you hit the atom with a board laser like with a sledgehammer, and then you see a beat node, you see some blinking, a quantum beat between the excitation of the levels. That's the idea. We assume we have a ground state and then we have an excited state manifold, and in this excited state manifold, we have several levels distributed over an energy interval delta. Yes, we don't have a narrow band source. 
We may just have a classical light source, but if we use a short pulse such that the pulse duration is much shorter than the inverse splitting between energy levels, then we create a coherent superposition of those levels. So therefore, what we create at time t equals 0 is a coherent superposition of energy eigenlevels. And the important thing is that this is at time t equals 0, but now, when time goes on, each amplitude, each part of the wave function, evolves with its frequency omega i, and if we would then look at, let's say, the emission signal as a function of time, we will find that-- I will give you a little bit more details later-- that yes, there is a decay approximately with the natural spontaneous emission time, with the inverse of the natural line width. But we observe some oscillations, which are the interference of the different terms in the wave function. So therefore, if we would take this signal and perform a Fourier transform, we will actually observe different peaks. This is frequency, and the frequency peaks are at discrete frequencies corresponding to frequency differences between the excited states. And ideally, the width of these peaks is determined by the natural line width. So in other words, what we have actually done is we have done a version of the double slit experiment. We have ground state, we had our excited states, e sub i, and our broad band source was creating a coherent excitation, and then we were observing the light which came out. We were performing a multi-slit experiment. We had a laser pulse and then we see photons coming out, but it is fundamentally not observable which intermediate state was responsible for the scattering. So therefore, we have in the Feynman sense several indistinguishable paths going through different internal states. And therefore, we get an interference effect. Some of what I'm saying we will retrieve later on when we talk about three level systems. We will also have situations that sometimes we go through two possibilities for the intermediate state, and if we have no way, even in principle, to figure out which intermediate state was involved, we have to sum up the amplitudes, and that's when we get a beat note. This technique is a Doppler free technique because, even if you take a single pulse from a light source, you have a Doppler broadening, which is k dot v, with v the thermal velocity, and this can be much, much broader. You will still see the quantum beats, maybe I should say in principle, because the beat happens at the much smaller frequency delta. Or maybe I should say that the Doppler shift is reduced by the ratio of the splitting of the excited states to the frequency of the exciting laser. Of course, if you have your different atoms emitting at different frequencies, you have a Doppler shift, but since you measure the difference frequency, you only get the Doppler shift associated with the difference frequency. Now let me come back to the previous example I had about the spin 1/2 system. If you assume you have a spin 1/2 system, spin up and spin down, which is excited with a laser with linear polarization, you would then create a superposition of up and down which, let's say, is now a dipole moment which points in the x direction. A dipole moment which points in the x direction will not emit light along the axis of the dipole moment because of the dipolar emission pattern. It will only emit to the side.
But I mentioned to you that the dipole moment or the spin, which is originally in x, will now oscillate with the Larmor frequency in the xy plane. So the picture you can actually have of such a quantum beat and quantum superposition is like the lighthouse. You have a searchlight at the lighthouse, and the searchlight is just rotating at the Larmor frequency. For instance, you wouldn't see light right now, now you don't see light, now you don't see light. It's really like a classical lighthouse which is emitting light at the Larmor frequency. So if you have a fluorescence detector which looks at the atoms from a certain direction, you will pretty much see the lighthouse effect that the fluorescence of this coherent superposition of atoms goes on and off, on and off, on and off. This is sort of a very nice visualization how you obtain what I showed here, a beat note in your detected signal. Let me talk about another aspect of coherence. And of course, they are all related. Coherence is always related to the phase, to beat notes, to superposition. Let me now talk about one aspect which is related to delayed detection. When I was a graduate student and I learned about spectroscopic techniques, somehow I was so fascinated by techniques which could measure transitions spectroscopically better than the natural line width. I don't know. Maybe from what I had read before, what I learned as an undergraduate, the natural line width appeared to be the natural fundamental limit. So the topic I'm teaching right now has always had a certain fascination for me, but you will, of course, also realize that in the end, the answer is rather simple. Once you know the answer, most answers are very simple. So we want to talk about delayed detection. Let's say we excite the system. You can think about a quantum beat experiment. You have a short pulse and then your quantum beats happen. And now the question is, normally, when you do a measurement on a decaying system, you're always limited by the natural line width, by the inverse of the lifetime. But now, maybe you want to be smart and you say, well, I start the detection, I only detect atoms after a time t0, which is much, much larger than the natural lifetime. And the question I have is, can you obtain, with such a measurement, a spectral resolution which is narrower than the natural line width? Well, we can give two possible answers. One is yes, because you're looking at atoms which have survived for a long time, so to speak. These atoms are longer lived. We have just selected atoms which happened to survive for several lifetimes. But then there should be some lingering doubts. If you have a sample which undergoes radioactive decay, and you would go to your favorite supplier and buy uranium, which has already decayed for a billion years, it's the same uranium which existed a billion years ago. You will not be able to perform any measurement on your well aged uranium, which has a higher resolution than if you had lived a billion years ago and had done your measurement with younger uranium. So in other words, the exponential decay is self similar. It starts at any moment and it looks exponential no matter where you start. I didn't bring clickers today, but with which answer would you side? Is it possible or is it not possible? Maybe just hands: who thinks it is possible, by taking advantage of the longtime survivors? OK. Who thinks it's impossible? A few. Good. The answer is: it actually depends.
If you would go and just look at the longtime survivors, you would not be able to do a more precise measurement. You need a little bit of information from the earlier time. So in that sense, the question is a little bit deliberately confusing, and I want to show you how the mathematics work. It's just five minutes to show the mathematics of a Fourier transform, and the result will be if you have information about something at t equals 0, and then you look at the long time survivors, you in essence have a longer integration period for your measurement, and then the Fourier transform of that measurement can be very narrow. But if you do the dumb thing, you just go to the store and buy very well aged uranium or very well aged atoms, and you then start your measurement, you have no chance. You are always back to the spontaneous decay, to the natural line width. All I have to do is actually just a few lines of mathematics and a Fourier transform. Let us assume we have the situation we discussed earlier. We have a quantum beat where we have a beat frequency omega 0. Just think about the searchlight, the atoms which oscillate with the Larmor frequency, and you have some cosine omega Larmor t factor in the intensity of the light you observe. But now, because the atom in the excited state is decaying, everything will decay with the natural line width. This is sort of what we observe, and the question is, if you observe that in real time, can we then retrieve spectral information from it which is more accurate than gamma? All I want to do is I want to discuss the Fourier transform of this function, s of t. Let me use dimensionless variables. We measure frequencies in units of gamma. We use a Lorentzian, which is just 1 over 1 plus x squared. We just said we want to start the measurement at time t0. So the question is, if you start later and later and later, do we get higher accuracy because we're talking to the survivors? So let's perform the Fourier transform, and let's use complex notation, e to the i omega t. I will measure times in units of the inverse line width. So we performed the Fourier transform, and by doing e to the i omega t, I actually performed the Fourier transform for the cosine and for the sine by using the real part and the imaginary part of the complex number. So we will actually be able to look at the real and imaginary part. The Fourier transform has a real and imaginary part, so let me call the real part F of x and the imaginary part G of x. You can do the math. It's a straightforward integral. In both cases, we will find that if we do our measurement, well, the longer we wait, the more signal we lose. This is common to all delayed measurements. You're really now talking to an exponentially smaller and smaller signal. Also, because of the exponential decay, we get an envelope which is a Lorentzian. But then, and this is the interesting part, we have cosine xT minus x sine xT. So now we have factors which depend on capital T, and capital T is larger the longer we wait. It is actually those parts with sine and cosine which determine whether we can get resolution below the natural line width.
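Written out, up to overall constants, with time in units of 1 over gamma, x the detuning from the beat frequency in units of gamma, and the detection starting at the delay time capital T, the integral gives:

```latex
\int_T^{\infty} e^{-t}\,e^{\,ixt}\,dt
  \;=\; e^{-T}\,\frac{e^{\,ixT}\,(1+ix)}{1+x^2}
\;\;\Rightarrow\;\;
F(x) = e^{-T}\,\frac{\cos xT - x\sin xT}{1+x^2},\quad
G(x) = e^{-T}\,\frac{\sin xT + x\cos xT}{1+x^2},\quad
|F+iG|^2 = \frac{e^{-2T}}{1+x^2}
```

So the power spectrum stays a Lorentzian of the full natural width for any delay T, while the cosine transform F of x by itself develops oscillations and a central feature of width of order 1 over T.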
What is important is-- and this should be sort of an eye opener for you-- if you simply measure intensity, if you look at the power spectrum, you take the real part plus i times the imaginary part and just look at the absolute value, then, because of cosine squared plus sine squared equals 1, all the cosine and sine parts, the last part of the expression above, cancel out and you find that you have an exponential loss of signal but your spectral distribution is always a Lorentzian. So you always have a Lorentzian line shape completely independent of the delay time, capital T. And this is what some of you maybe thought. If I start the measurement later and all I can do is look at the power of the emitted light, I have no advantage. I cannot go sub-natural. However, if you look at the function F of x for large values of capital T, you find oscillations. So if you look at the sine or cosine Fourier transform, the real or imaginary part separately, you find oscillations, and those oscillations, similar to Ramsey fringes, have a central peak. The central peak is narrow and the width is now given by not the inverse of the natural lifetime but the inverse of the delay time you wait for your measurement. So the fact is now we had a signal s of t which we assumed was a quantum beat with a well defined phase, and then it was exponentially decaying. If we would now perform a Fourier transform with cosine omega t plus phi, we can get a narrow signal. But if we have no idea what phi is, or if, in repetitions of the experiment, phi would be random, then this is sort of what the math does. If phi is random, it is the same as if we simply measure the power spectrum because we cannot distinguish between the cosine and sine Fourier transform. So if the phase phi is random, that means you only measure what was F plus iG before. In other words, the situation is in the end extremely simple. If you have quantum beats which start with a well defined phase, and you know the phase was, let's say, zero here, and now you have the decaying function, and now you look at the quantum beat over there, well, in a way, you had n beats between t equals 0 and your measurement, and then your resolution goes with 1 over n, but you have to know what the phase was at t equals 0, and then, by looking at delayed detection, you can do spectroscopy below the natural line width. So therefore, what is crucial here is knowledge, or at least reproducibility, of the phase phi, and then you can get narrow lines. Examples for techniques where you excite the system at t equals 0 and then you can do delayed detection of quantum beats. I mentioned earlier in the course the Ramsey spectroscopy where you have one Ramsey zone where you prepare your Bloch vector. Then the Bloch vector oscillates, and if you simply look at the phase angle of the Bloch vector after a very long time, you have very high precision but you're dealing with an exponentially small signal. Another example are heterodyne or homodyne techniques, but you need something which is phase sensitive in order to obtain sub-natural line width. A final comment is if you want to get higher resolution with delayed detection, yes, you can get it, but you lose exponentially in signal. And what does it mean in practice? Well, if you know your line shape, you know it's well described by a Lorentzian, it is better to take your full signal and then use the excellent signal to noise to find the line center and split the line.
However, if there's any ambiguity, there may be different, not fully resourced lines under the Lorentzian and you don't know how to split the line, then it may be better to do delayed detection and clearly see the structure of the lines with sub-natural resolution. Any questions? This was coherence with two levels, coherent excitation, coherent observation, some spectroscopic techniques. Now we are ready to do the next step, namely, to talk about coherence in three level systems. If we have three levels, we could think about it, we have terms which connect level one to two, level two to three, and level three to one in all possible ways, but that's not what we want to assume here. The situation where we can discuss some fundamentally new effects is when we have two states connected through a third state. In other words, if we have two levels, we are not allowing any transition matrix element connecting the two. They are only connected through a third state. This is for obvious reasons called the lambda type system. You can turn it upside down and you have the V type system or, if the intermediate state is between the first and the second state, you have a ladder type system. But once you start driving it, it may not really matter. There may be a dressed atom description where, if you drive two states coherently in the dressed atom picture, you have degeneracy between this level and one more photon and this level. And then in the dressed atom picture, which includes a number of photons, the two levels have become degenerate. So therefore, it's very important for practical applications or how to implement it in an atom what kind of system you have, but for the description of those systems, some of the differences may simply disappear if you formulate it in the dressed atom basis. Of course, there's an important practical reason. Usually the lower states are ground states, the upper states are excited states. And here you have the opportunity, and that's why the lambda type system is the most important one, to have some coherent superposition mediated by the third state, and the coherent superposition is stable because it's a coherent superposition of ground states. If any form of coherent superposition involves an excited state, then you have short lived states, and they are often not so useful for certain phenomena. So if you think you know already everything about how atoms interact with light from two level atoms, I have to tell you that's not the case because a three level system has many new effects. One, of course, is that atoms can now interact with two electromagnetic fields, and those two electromagnetic fields can affect each other, and this can happen through coherent or incoherent mechanisms. In other words, you can say it simply. If you hit an atom with light and you have a two level system, there is no way how the atom can hide. It's always excited by the laser. But if you have a three level system, you may have a situation where you have destructive interference between what the two lasers can do to the atom, and suddenly, there may be a state where the atom is in the dark where the atom can hide from the laser beam. This is something which is fundamentally new and has no counterpart in a two level system. I've already pointed out that the lambda system is the most important one because it has two ground states which can be in a long lived superposition state. 
What we want to discuss as possible consequences is that in a three level system, you can realize a lasing operation without having inversion of the population of the ground and excited state. So if you always thought, if I want to build a laser, the first thing I have to do is make sure I have more atoms in the excited than in the ground state, yes, this is valid for a two level system, but it is no longer valid for a three level system. The reason why you want to invert a two level system is you want to have stimulated emission from the excited state which is stronger than absorption from the ground state. But if you take advantage of quantum coherence, you may have a situation where two possibilities for stimulated emission add up coherently, but the two possibilities for absorption add up destructively. And therefore, you can avoid absorption. You have only stimulated emission, but you have not achieved that through inversion. You have achieved that through quantum coherence, a fundamentally new effect. So we have lasing without inversion, we have the phenomenon I mentioned already that atoms can hide in the dark if the two laser beams in the excitation mechanism destructively interfere. This is called electromagnetically induced transparency. Systems which have sharp resonances in three level systems are used for reducing the group velocity of light, which goes under the name slowing light, or even bringing light to a standstill, stopping light. And three level systems are also used for quantum mechanical memories for quantum computation. Any questions? This is an introduction. Let me connect special effects in a three level system to something which is very basic and you've heard about it, and this is optical pumping. If we set up a system which has two ground states, g and f, you may just think about two hyperfine states in your favorite atom, and they are only coupled through an excited state. You can now drive the system with laser fields omega 1 and omega 2. Let me also use that example of optical pumping to introduce some notation which I will need to describe the system with a few equations. We will use energy level diagrams, and the energy is referred to the lowest ground state. So here, we have an energy splitting which is omega gf, and the excited state is at omega ef. We will call the photons in one laser the photons created and annihilated with the operators a and a dagger, and for the photons for the other laser beam, we use c or c dagger. Now, there is a very simple solution for this situation, a very simple equilibrium situation, if you have only one laser beam. If you have only one laser beam, omega 1 or omega 2, it's clear what happens. If you have only one laser beam, let's say omega 1, it doesn't talk at all to the atoms in the state f. They are left alone. But the atoms in the state g are excited to the excited state, and then there may be a certain branching ratio for spontaneous emission, but let's rather call it fluorescence, two photon scattering. So there will be a branching ratio to go back to that state or to go to the state f. If the latter happens, the atom doesn't interact with the laser light anymore. If it goes back to the original state, the atom will try again and again until after a while, all the atoms have been optically pumped into the state f. And the same would happen if you have a laser, omega 2, then you would pump all the atoms into the state g.
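A minimal rate-equation sketch of this optical pumping argument is given below; the excitation rate R and branching ratio b are made-up illustrative numbers, and the excited state is eliminated adiabatically, so this only illustrates the pumping dynamics described above.

```python
# Minimal optical-pumping sketch: only laser 1 is on, so only state g is excited.
# After adiabatic elimination of the excited state, each excitation returns the atom
# to g with probability b (branching ratio) or transfers it to f with probability 1-b.
R  = 1.0e6      # effective excitation rate out of g, in 1/s (illustrative value)
b  = 0.5        # branching ratio back to g (illustrative value)
dt = 1.0e-8     # time step, s
steps = 2000

P_g, P_f = 1.0, 0.0                     # start with all atoms in g
for _ in range(steps):
    transfer = R * (1 - b) * P_g * dt   # net rate of pumping g -> f in one time step
    P_g -= transfer
    P_f += transfer

print(f"after {steps*dt*1e6:.0f} us: P_g = {P_g:.3f}, P_f = {P_f:.3f}")
# essentially all population ends up in f, which no longer scatters light
```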
So if you have only one laser beam, omega 1 and omega 2, then in equilibrium, the equilibrium population is 100% of the atoms are in state f or g respectively, and this is nothing else than the phenomenon of optical pumping. We have a very simple solution. We pump all the atoms into one quantum state if we have only one of the laser beams. But the question is now, can we have a similar situation when both laser beams are on? And what I mean by that is, is it possible now to pump all the atoms into a state which does not scatter any light, which does not react with the light, which is never excited to the excited state? The answer is yes, and this is what we want to derive right now. Before I go into any equation, the result is pretty clear. If you have, say, g and f and they are both excited, if the amplitude which you put into the excited state is the same but has opposite sign, the two amplitudes which are added in a time delta t destructively interfere and you have not put any amplitude into the excited state. And that means, if you have this initial superposition state where this complete destructive interference happened, this state will be dark all the time. But we want to assume-- also it doesn't matter-- that the two states, the two lasers have two different frequencies. So you cannot say, this is just the two lasers have a certain phase and then the laser field interferes and they reach a space which is dark. We assume that the atom is sitting at the origin. Again, we're not putting in motion effects, so it's an atom with infinite mass. Then we shine two laser lights on it and the atom is not in the dark. It's not at a dark fringe of the interference of the two laser beams because if you have two laser beams with two different frequencies, there will not be any place in space which is dark all the time. You create maybe interference fringes, but the interference fringes are rapidly running with a beat node, omega 1 minus omega 2. So the atom is not in the dark, but nevertheless, it will not scatter light if it is prepared in a suitable coherent superposition state. We describe this situation with a dipole Hamiltonian, and we make the rotating wave approximation. The Hamiltonian has three lines, three parts. One is we describe each of the laser fields as a single mode. I call the frequency now omega a and omega c, just to connect it with the operator c dagger, c. For the atom, we use the matrix, two by two matrices, so this is the matrix if the atom is in the ground state. Coherences are described by that. And of course, without any interaction with the laser field, the atomic Hamiltonian is atoms are in the ground state. the state f has zero energy. We use that as the origin of the energy. The state g has an eigenenergy of omega gf, and the excited state e has an eigenenergy of omega ef. But now the important part is that we want to have the coupling. And actually, I realized I was not saying it correctly. Omega 1 and omega 2, these are the Rabi frequencies of the two fields, and the two fields are at frequency omega a and omega c. So now we have the coupling between the excited state and the ground state via photons a and a dagger. And the coupling happens at the Rabi frequency omega 1. And then we have the second laser field, which is at Rabi frequency omega 2. We have the atomic raising and lowering operator, and we have the photon c and c dagger. That's a nice Hamiltonian. It has three lines. 
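For concreteness, here is one plausible way to write that three-line Hamiltonian in the rotating wave approximation. The assignment of field a (Rabi frequency Omega 1) to the g-e leg and field c (Rabi frequency Omega 2) to the f-e leg, and the factors of 1/2, are assumptions for notation, consistent with the optical pumping discussion above rather than a verbatim copy of the blackboard:

\[
H = \hbar\omega_a\, a^\dagger a + \hbar\omega_c\, c^\dagger c
  \;+\; \hbar\omega_{gf}\,|g\rangle\langle g| + \hbar\omega_{ef}\,|e\rangle\langle e|
  \;+\; \frac{\hbar\Omega_1}{2}\left(a\,|e\rangle\langle g| + a^\dagger\,|g\rangle\langle e|\right)
  \;+\; \frac{\hbar\Omega_2}{2}\left(c\,|e\rangle\langle f| + c^\dagger\,|f\rangle\langle e|\right).
\]

The first line is the two field modes, the second line is the bare atom with the state f taken as the zero of energy, and the third line is the two couplings, each photon operator paired with only one atomic raising or lowering operator.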
The important part here, which we have explicitly assumed is that each of the lasers, a and a dagger, c and c dagger, are only driving one transition. One field is responsible for connecting the state f to the excited state. The other field is responsible for connecting the state g to the excited state. In practice, this can be accomplished by you have maybe polarization. This is a plus one state, this is a minus one state, and the excited state is m equals 0. Then one laser beam is sigma plus, the other one is sigma minus, so it can be polarized and the two laser beams can only talk to one of the ground states. Or you can have a situation that you have a huge energy separation. Let's say you have a large hyperfine splitting and the two lasers are separated by frequency. I think I've set the stage, but I think I should stop here and on Wednesday, I'll show you in the first few minutes of the class that this Hamiltonian has a simple solution, which is a dark state, which is a superposition state of g and f. Any questions? See you on Wednesday. A few people haven't picked up their midterm quizzes. If you want them, I have them here.
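As a preview of the dark-state solution promised for Wednesday, the standard textbook result for a lambda system driven on two-photon resonance is the ground-state superposition (stated here in the usual convention, not as the in-class derivation)

\[
|D\rangle \;\propto\; \Omega_2\,|g\rangle \;-\; \Omega_1\,|f\rangle ,
\]

for which the two excitation amplitudes to the excited state cancel, \(\Omega_1\Omega_2 - \Omega_2\Omega_1 = 0\). An atom prepared in this state is never excited and scatters no light.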
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
20_Line_Broadening_IV_and_Twophoton_Excitation_I.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. Here is the menu for today. We're discussing line shifts and line broadening. And I want to finish up this chapter today by describing collisional narrowing, also called Dicke narrowing. Then I want to have two more shorter topics on two other aspects which lead to important line shapes and line broadening. So I want to quickly discuss the spectrum of emitted light by an atom. And I want to discuss collisional broadening. None of that will be done in-depth. The spectrum of emitted light is really open-ended, and we will have a more advanced treatment in 8.422. But I do feel if I show you all other kinds of line shifts and line broadening, I should at least mention the basic things here. And collisional broadening, I'm not sure how many atomic physics courses you will find which teach about collision broadening, because this is the physics from gas discharge lamps, the old-fashioned physics. However, I've realized that a lot of people know now about clock shifts, and mean field shifts, and mean field broadening in ultra-cold gases. And they have no idea that similar physics actually happens in an ordinary gas. So at least in terms of broadening your understanding, I want to talk just 10 minutes about collisional broadening. I've completely eliminated from this course the quantitative description of collisional broadening, but I want to show you a few cartoons and put some pictures into your mind. And probably, we have time to start the next chapter, which is actually a pretty short one, on two-photon transitions. For me, actually, Dicke narrowing is really the highlight of the course, because it provides conceptual insight into what line broadening really is, and to realize that collisions can narrow lines and not just broaden them. This is sort of subtle and insightful. And similarly for two-photon transitions: it's short, but I hope it's also a highlight, because there are so many people-- you and other people-- who are sometimes struggling-- a photon is absorbed and is emitted. Usually, the photon is not absorbed, the photon is scattered. And whenever you think about photon in, photon out, you really should think about two-photon transitions. So the framework of two-photon transitions allows me now to give you the tools for how you should really think about it whenever you have atoms interacting with light. The light is not absorbed, the light is just scattered. And so you need that. On the other hand, based on all the descriptions I've given you about light-atom interaction, two-photon transitions would just mean we need one more order of perturbation theory, and then it's the same thing you have already learned. So it's a highlight, and it's to some extent also a review. You will actually recognize, in some situations, the difference between two-photon processes and one-photon processes. It's not so big, it's just you have to-- you'll see. You have to use a different Rabi frequency and some different concepts. OK. So that's, I think, really an agenda of highlights. Let's go back to the physical picture we drew up on Wednesday about Dicke narrowing. I just explained to you that when we have an atom which is trapped and tightly confined, the spectrum consists of a sharp line and sidebands. 
And now I was addressing the situation-- what happens if you have an atom which is surrounded by buffer gas? I would say, well, that's a cheap trap created by nature, because when the particle wants to fly away, it collides with a buffer gas atom. So it stays put. So it's a cheap trap, but it's also a lousy trap, because there is some randomness in the number of collisions. But I'm waving my arms, so that maybe you can go along with the picture that it is an ensemble of traps which have very different trap frequencies. And then we would expect, based on our understanding of the spectrum of confined particles, that we have a carrier, which is always the same, at the electronic excitation frequency. But then we have sidebands at the trap frequency. But those sidebands are now smeared out, because we do not have a defined harmonic oscillator potential here. So therefore, I sort of tried to lead you in the way that that may be one way how you can think about the situation. And well, now I want to give you a different way to look at it. And let's see how those things come together. So if I would ask you, give me a quantitative estimate of how wide this line is, do you have any idea how we can do that? Or let me even put it this way. It's again one of those kind of things-- you have the knowledge, but to put it together is hard. But I would guarantee everybody in this room has the knowledge to write down in one line what is the width of this line. What is the width of this line? It's an inverse time. What time? Coherence time. Yeah. But now, if you have an atom which is now starting and it hits buffer gas, how would you estimate the coherence time? AUDIENCE: Mean free path. PROFESSOR: Mean free path will be important. But the coherence time-- how long will the atom talk to the laser beam in a phase-coherent way? So what do we have to compare to the wavelengths? AUDIENCE: Mean free path. PROFESSOR: The mean free path. This will be the important parameter. But if the mean free path is much shorter than the wavelengths? Well, the atom has just moved a tiny bit, one mean free path. Is it still coherent at this time? AUDIENCE: It's shorter than the wave. PROFESSOR: If the mean free path is shorter than the wave. So it will do many collisions. But since it stays localized, it will still coherently interact. When will it stop coherently interacting? AUDIENCE: Mean free path. PROFESSOR: No, the mean free path is always smaller. Let's just assume that. I hear "diffusion." AUDIENCE: [INAUDIBLE]. PROFESSOR: Yes. The atom has a short mean free path, but it will do a random walk. And the moment it diffuses by more than a wavelength, it has randomly changed its position by a wavelength. And that would mean it experiences the phase of the drive field in a random way. End of coherence. 1 over this time is its line width. OK. So let me write that down. So based on the concepts we have learned by looking at Doppler broadening and all that, we realize the important aspect is, when do atoms in an ensemble randomly move by a wavelength? So therefore, our estimate is now-- estimate the widths of the sharp peak. We use a model which is diffusion. We know that in diffusion, the random walk, there is an RMS position by which an atom has moved away. Ballistic motion is linear in time. Diffusive motion goes as the square root of time, or z squared is linear in time. And we have the diffusion constant. Diffusion by, well, lambda or lambda bar happens after a time which is the wavelength squared divided by the diffusion constant. 
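Putting that estimate into symbols (a rough order-of-magnitude sketch; numerical factors of order unity are ignored): the time to diffuse by a reduced wavelength \(\bar\lambda = 1/k\) follows from \(\langle z^2\rangle \approx D t\), so

\[
t_{\bar\lambda} \sim \frac{\bar\lambda^{2}}{D}, \qquad
\Delta\omega \sim \frac{1}{t_{\bar\lambda}} \sim \frac{D}{\bar\lambda^{2}} = k^{2} D .
\]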
So therefore, we would expect that the full width at 1/2 maximum of our peak is k squared-- k, the wave vector of the light, and D is the diffusion constant. Well, since you mentioned the mean free path, let me, already at this point-- I wanted to do it earlier, but it fits in very well here. In an ideal gas, the diffusion constant is given by some thermal average speed times the mean free path. And if you look up some textbooks, there's a factor of 3. So our delta omega, which is k squared D. Let me write it as k times v bar. And the other k I write as 1 over lambda bar. So this is k squared, and I need l. So this is nothing else than k squared-- I take this expression, k squared l. This is one k, this is the other k. And v bar times l is the ideal gas expression for the diffusion constant. But now you realize that k, dot, v-- since in a gas, the most probably velocity is also the momentum spread, this is nothing else than the Doppler width. So therefore, we find that the line widths in Dicke narrowing-- if we have buffer gas and we have diffusive motion-- is much smaller-- and this is why it's called Dicke narrowing-- than the Doppler broadening if the mean free path is much smaller than the wavelength. So this is where the mean free path comes in. What happens if the mean free path is much longer than the wavelength? What line widths do we then get? Do we then have a line width which is larger than the Doppler broadening, or do we get the Doppler broadening? Let's have a clicker question. So if l is much smaller than lambda bar, is delta omega-- and this is your option A-- equal to? Or option B, larger? OK. So what do you think? It's always a question when we derive something, how seriously you should take what we derived. So this expression for delta omega Dicke shows that the line width would get larger and larger the longer the mean free path is. And the question is, is that correct or not? OK. Any more takers? Whoops. Oops. Sorry, press again. I erased it by clicking the wrong button. You've already made up your mind. You know the answer. So stop, display. Yes. The majority answer is definitely correct. What happens until the first collision happens, you just have normal Doppler-- you don't have diffusive motion, you have ballistic motion. And if the line width is already determined once the atoms have spread out by a wavelength, that's it already. And then if then the particles collide, it doesn't matter. So what we have assumed is we've assumed that the relevant model for the spread out to a wavelength is a diffusive model. And if we are past that-- because the mean free path is larger than the wavelength-- we have to go back to normal Doppler broadening. OK. But now let me calculate Dicke narrowing by using the formalism we have developed. So let's use the correlation function for that. And we know that the line width was nothing else than the Fourier transform of the correlation function, how the atoms experience the drive field at two different times. So we had here the matrix element, the Rabi frequency squared. We have between time t and t plus tau, the phase of the drive field accumulates e to the i omega tau. But now we have the e to the minus ikr factor. And r, or the position-- in diffusion, you often call it s-- changes now, because the particle undergoes a random walk. So what we have to do is to describe collisional narrowing, we have to take this factor and average it over our ensemble of diffusing particles. 
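Before continuing with the correlation-function calculation, the comparison just spelled out in words can be summarized as follows, using the ideal-gas estimate \(D \approx \bar v\, l/3\) and dropping the factor of 3:

\[
\Delta\omega_{\rm Dicke} = k^{2} D \approx (k\bar v)\,(k\,l)
\approx \Delta\omega_{\rm Doppler}\,\frac{l}{\bar\lambda},
\]

so for a mean free path l much smaller than \(\bar\lambda\) the line is narrowed well below the Doppler width, while the diffusive argument stops applying once l exceeds \(\bar\lambda\) and one is back to ordinary Doppler broadening.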
And well, diffusion means that if particles start out at t equals 0 at the origin, the probability that we find the particle at time t, a distance s away from the origin-- so I'm depicting this one here. The particle does sort of a random walk. And after time t, it is out at a position s. And the probability for that is e to the minus s squared over 4 Dt. And you see, s squared scales with t. This is a random walk. s increases as the square root of time. And the probability is normalized by this expression. So therefore, this red average in the correlation function is calculated by convoluting with this probability distribution for the random walk. So it is e to the minus iks, e to the minus s squared over 4 D tau-- the time is now called tau. We then integrate over all possibilities. And e to the i omega tau is a common factor. My notes show that the integral is done from minus infinity to plus infinity. Either this is right, or there's a factor of two to be accounted for, because well, it depends. Now, is s a coordinate, or is s a radius? In one case, it has a sign, in the other case, it has not. I'm not able to reconstruct it now, but it's just a numerical factor, which would be affected by that. So the result of this integral is that we obtain an exponential function. So what we find out is that there is now an exponentially decaying function. Remember, we had situations where the correlation function for how the atom experiences a coherent field was decaying exponentially because of the spontaneous lifetime? And now we have an exponential decay, because the particle is diffusing around. But we also know, of course, that exponential decay, when we Fourier analyze it, gives us a Lorentzian. So when we ask what is the rate, Fermi's golden rule rate expression, the rate for excitation, we want to do the Fourier transform of this correlation function. And what we obtain is, well, a Lorentzian which looks like the Lorentzian for spontaneous emission, except that we have a different width now. It's a Lorentzian with a width 2 k squared times D. So pretty straightforward. But I hope you've seen and you've enjoyed that there's a very intuitive picture. But the correlation function-- if you just look at it from the perspective of the atoms, how do I experience a coherent field? You put in simply the diffusive motion. You can exactly calculate what the line shape is. And that's definitely something you would not have known how to do without this formalism. OK. I have a question for you. And this is, what does the spectrum for diffusive motion look like? What we have just calculated was that it is a Lorentzian with the line widths we've just determined. But I want to give you another choice. And this is what we discussed at the end of the last class. This picture where we took the confined particles to the ridiculous limit-- where there was no trap anymore, just collisions-- suggested actually that we have some sharp line. But then this envelope of those sidebands gives some unresolved pedestal. So the question is, what is correct? We have just done a quantitative calculation using the model, a diffusion propagator. But we also had this intuitive picture, which had more this kind of bimodal distribution. A broad pedestal and a sharp peak. And I want you to think about it for a few seconds. What would you expect to be the correct answer? Is the line shape just a Lorentzian, or does the line shape have two different parts to it? OK. Does somebody want to speak out in favor of his or her choice? 
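Here is a compact version of the calculation just described; the overall normalization and the question of integrating s from minus infinity to plus infinity versus zero to infinity (the factor of two mentioned above) are left aside:

\[
\big\langle e^{-iks}\big\rangle_\tau
= \int ds\; \frac{e^{-s^{2}/4D\tau}}{\sqrt{4\pi D\tau}}\; e^{-iks}
= e^{-k^{2} D \tau},
\]

so the correlation function decays exponentially in tau, and its Fourier transform is a Lorentzian,

\[
S(\omega) \;\propto\; \frac{k^{2}D}{(\omega-\omega_0)^{2} + (k^{2}D)^{2}},
\qquad
\Delta\omega_{\rm FWHM} = 2\,k^{2} D .
\]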
Well, one argument is, when you derive something, it must be more correct than when you wave your hands. Therefore, we derived a Lorentzian, and the other picture was just waving our hands and using some analogy. OK. That would be one argument to vote for A. Somebody wants to defend B? Pardon? AUDIENCE: More intuitive. PROFESSOR: More intuitive. Yes. You know, something must be right about it. I mean, in some limit, this must be like a trap, and there should be some sidebands. Yes. OK. But now I would ask you-- Nancy, if you say it's more intuitive, why don't we get this broadening? Why don't we get this extra pedestal in our quantitative derivation? AUDIENCE: [INAUDIBLE]. PROFESSOR: But what? Didn't we use the exact probability distribution for the diffusive process? AUDIENCE: There's no harmonic oscillator. The sidebands [INAUDIBLE] harmonic oscillator. PROFESSOR: So that's an argument also for A. We don't have a harmonic oscillator. And when I said there is sort of this trap-type feature, I'm really over-extending the analogy. AUDIENCE: The mean free path is not-- PROFESSOR: Pardon? AUDIENCE: The mean free path is not infinitesimally small. PROFESSOR: The mean free path is not infinitesimally small. Now we're getting close. We assumed diffusive motion. We put in the exact expression for diffusive motion. And of course, you also know, when you describe a line width, what happens in the middle at detuning 0 is more what happens in the limit of long times. What happens further and further out happens at shorter times. What is the motion of atoms at short times? Before the first collision happens, it moves straight. So the diffusive propagator which we put in is only valid after the first collision. Until the first collision happens, we have free motion. And free motion should give rise to simple Doppler broadening. So until the first collision happens, we should get a little bit of Doppler broadening. But once the collisions have happened, we should be very well within the description of the diffusion operator. And to address your concern, actually, in the limit of many sidebands, often, the envelope of all those sidebands is actually determined by the Doppler profile. There is a limit, if you have a large modulation index, that you have many sidebands. And in some semi-classical limit, the sidebands have an envelope which is the Doppler profile. So now I think the two pictures agree. If we had used a propagator which would interpolate between the first moment where the particle moves straight, and then diffusion, we would have gotten little pedestals here. And so that sort of tells us that the smeared-out sidebands, yes, strictly speaking, they are linked to harmonic motion. But they are sort of the leftover of the motional effect, which is the Doppler effect. OK. Good. So we have the correct answer here. And here, it is that the model neglects ballistic motion until the first collision. Questions? OK. Let's now spend 10 minutes discussing the fluorescent spectrum of an atom. As I pointed out in my overview, we teach much more about it in 8.422. We use a dressed atom picture. But something would be missing if I wouldn't do it also in this course, because we have discussed to quite some extent what happens when we excite atoms with light. And what we have discussed so far is that we excite the atom and scan the laser, or the excitation frequency. And when we are plotting the intensity of the fluorescence, we are looking at the number of scattered photons. 
And what we are scanning is the detuning of the laser. We would expect, in the case of a motionless atom, simply a Lorentzian, or in the general limit, a power-broadened Lorentzian. Just to be clear, what I'm discussing in those 10 minutes is I'm completely ignoring that the atom can move. So you should think that it's either an atom with infinite mass, or it's an atom tightly localized in the Lamb-Dicke limit, and all we are looking at is the structure of the central peak. So no motional effects. We just look at the pure kind of intrinsic line widths of the electronic transition. OK. So we have discussed that. But now I want to look at another aspect of spectroscopy. And this is another scan we can do. We want to have a fixed detuning. And we look at the spectrum of the emitted light. So let's assume that we have a laser which is at a detuning delta. The laser is fixed. The light is emitting. But we are now dispersing with a spectrograph. We are analyzing what is the frequency of the emitted light. And we determine the spectrum. And yes, the question is, how does it look like? And I want to give you four options. So this is 0 detuning at resonance. Oops, I need one more. Let me relabel it to make it clear. So our option one is our spectrum is a delta function at the resonance of the atom. Option two is it is a Lorentzian centered around the resonance. Option three is it is a delta function at the laser frequency. And option four is it is a Lorentzian with line width gamma. So we have two options. Is it a delta function at omega 0 or omega l? Or is it a Lorentzian-broadened function at either omega 0 or omega l. And since we want to keep things simple, we want to first discuss the case of the perturbative case that the laser which excites the atom has very low power. So what would you expect? An atom is excited, it's a little light bulb. You analyze the spectrum of the light bulb. Which of the four spectrum will you measure? OK. Let's try again. [LAUGHTER] And maybe I should, like our online learning system, try to give you one hint. I want you to really think hard what energy conservation means in this problem. OK. OK, we're getting closer. Now, the way how I want you to think about it is if you take the limit of low power, you should really think about it that there is only one photon in your whole laboratory. This photon is scattered of an atom, and then you measure the frequency of the photon. And energy conservation clearly tells us that the frequency of the scattered photon cannot be at the resonance frequency. It has to be at the laser frequency. Otherwise, we would violate energy. But it seems-- still, the more subtle thing is, is it a delta function? Of course, a delta function will be broadened. If you do the experiment for one second, you will have a Fourier line width which is 1 hertz. But that's, for practical reasons, almost like a delta function, because the spectral line widths, or the natural line widths of your favorite atom is 10 megahertz. So there will be some temporal broadening, which is in case in any realistic experiment. But we are talking about, are you limited by the experiment and by technical noise, or are you limited by the natural line widths? Now, I would argue, just with energy conservation-- but I'll give you another argument in a second-- that if I have a monochromatic laser, the photon which has to come out has to be exactly the same frequency. Because we talked about energy conservation. 
And if I would start with a photon at the laser frequency and I would measure a photon which has 10 megahertz away, I would violate energy conservation by 10 megahertz. So the argument is actually correct. It's a delta function at the drive frequency. But I want to give you another argument. It's, again, something you have heard in the course. But you should really take it seriously, and apply to that situation. And that's the following. Remember when we talked about the AC stark effect. I told you that you should really think about the atom as a harmonic oscillator. We even introduced for you the oscillator strengths that we could even quantitatively describe the response of the atom to the scattering of light by pretty much a model where we have a mechanical model where an electron is attached with a spring to an origin. OK. Let me now paraphrase the question. Well, I've given you the answer. Before giving you the answer, I should have asked you the following-- if you now make the assumption that you have a harmonic oscillator, the harmonic oscillator has a resonance frequency of omega 0. And it has a damping rate of gamma. But we are driving the harmonic oscillator not at omega 0; we were driving it at a frequency omega l. At what frequency does the harmonic oscillator respond? At the drive frequency or at its own frequency? AUDIENCE: Drive. PROFESSOR: Of course. It's a drive frequency. And if you have a CW experiment, I said let's wait a second, let's really do a long experiment, if you have a driven harmonic oscillator, and you drive it for a long time, and you analyze the spectrum of the motion, is the frequency spectrum of the driven harmonic oscillator absolutely sharp at the drive frequency? Or is it broadened by gamma, the damping constant of the harmonic oscillator? AUDIENCE: Sharp. PROFESSOR: Who thinks it's sharp? Who thinks it's broad? Hands up for sharp. Hands up for broad. OK. AUDIENCE: [INAUDIBLE]. PROFESSOR: Look, what you are overlooking is the following. There is the difference between a transient, which is damped out at a rate gamma. But then there is the CW response. If you have an ultra-weak drive, you leave it on for an hour, the harmonic oscillator will reach steady state. And it will just oscillate driven by your drive. And it's actually monochromatic. And if you analyze the motion, it's a harmonic oscillator in steady state with a drive, and it just moves with a fixed amplitude and fixed frequency. I mean, the way how I shake my hands, this is a delta function at the drive frequency. So in this simple harmonic oscillator model, gamma and damping comes in when you do what I discussed earlier, when you change the drive frequency. But this is a completely different experiment. We are not changing the drive frequency and looking for the response. We are a fixed drive frequency, and we are analyzing what the motion is which comes out. And then, as you know from differential equation, you may have a grade transient. You may have a transient at the resonance frequency which dies out with a rate gamma. This is when you suddenly switch on your drive, and you're not adiabatically switching on your drive. But this is just a transient. But what we are asking here when you do a CW experiment, you drive it for a long time, you look what happens. And the motion of the driven harmonic oscillator is a delta function at the drive. And what is valid for a harmonic oscillator is also valid for the atom, pretty much for the same reasons. Any questions? Yes. 
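As an aside, to make the harmonic-oscillator argument explicit, here is the standard steady-state solution of a weakly driven, damped oscillator (a textbook sketch, not copied from the lecture notes):

\[
\ddot{x} + \gamma\,\dot{x} + \omega_0^{2}\,x = \frac{F}{m}\cos\omega_L t
\quad\Longrightarrow\quad
x(t) \;\to\; \mathrm{Re}\!\left[\frac{F/m}{\omega_0^{2}-\omega_L^{2}-i\gamma\omega_L}\; e^{-i\omega_L t}\right]
\]

after the transient at omega 0 has died out at the rate gamma. The steady-state motion, and hence the radiated field, is strictly monochromatic at the drive frequency omega L; the damping gamma only sets the amplitude and phase of the response.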
AUDIENCE: Regarding the energy conservation in D, wouldn't two-photon processes explain energy conservation? PROFESSOR: OK. You are now really asking-- Nancy is asking about two-photon processes. Well, we don't want to stop here. We want to see something more interesting. This is just sort of the trivial, simple harmonic oscillator. The question is now, if you want to see something richer, if we want to see a little bit of broadening, or we want to see something which is not as boring as just a classical harmonic oscillator, what do we have to do? AUDIENCE: Increase the strength. PROFESSOR: Increase the strengths of the field. And I heard somebody saying, two photons. And that may be-- AUDIENCE: Next-order perturbation theory. PROFESSOR: Pardon? AUDIENCE: It's like next order perturbation theory. PROFESSOR: Next-order perturbation theory. But just in the sort of intuitive picture, if you have two photons which come quickly enough-- and how quickly the two photons come will be debated. Of course, it's parametrized by the Rabi frequency, the strengths of the drive. But if two photons come quickly, each photon has to be scattered with a delta function because of energy conservation. But if we have two photons, suddenly, it's possible that one is scattered here and one is scattered here. And we still conserve energy. So if you want to see some form of line broadening, if you want to see all the things you have mentioned, we have to go away from the low power limit. And this is what we do next. So we assume that we have higher power. We are on resonance. And let me assume that the Rabi frequency is larger than gamma. So what is the physics now of the atom if the Rabi frequency is larger than gamma? It can do Rabi oscillation before it's damped. So now we have a system which has internal dynamics at the Rabi frequency. And you know if you have an emitter, if you have an antenna which emits light, and you move the antenna around at a frequency-- we had the example of a trapped particle which has a trap frequency of omega trap-- the spectrum, if you Fourier transform it, leads to Bessel functions, it leads to the sidebands. So therefore, we know now, based on all the analogies, if you have a modulated emitter, we obtain sidebands. That's what we would expect classically. So let me now ask you the three following possibilities. Maybe we still get a delta function. Maybe we get three peaks where the splitting is the Rabi frequency. So the two outer peaks have a-- and the third option is that yes, we observe the Rabi frequency, but the Rabi frequency is the splitting between the outer peaks. So the first and second answer differ by a factor of 2 in the sideband spacing. So we have three choices. Do we expect sidebands? That's A and B. Or would you expect that we still have a delta function, because, well, maybe energy has to be conserved at the single photon level? And then is the splitting the Rabi frequency on each side, or the Rabi frequency between the two sidebands? All right. Yes. It's indeed the situation. For those who picked A, it's pretty much the definition of what the Rabi frequency is. After one cycle of the Rabi frequency, the atom has gone from the ground state to the excited state and back to the ground state. So the model you should have is that you have an object which is emitting light, but the object has some internal modulation at the Rabi frequency. 
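The statement that a modulated emitter radiates at the sum and difference frequencies is just the elementary modulation identity (a sketch; the actual weights and widths of the peaks require the dressed-atom treatment of 8.422):

\[
e^{-i\omega_L t}\,\cos(\Omega_R t)
= \tfrac{1}{2}\left(e^{-i(\omega_L-\Omega_R)t} + e^{-i(\omega_L+\Omega_R)t}\right),
\]

so an emitter at omega L whose amplitude is modulated at the Rabi frequency Omega R produces sidebands at \(\omega_L \pm \Omega_R\) in addition to the carrier.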
Whatever factor of 2 you had in the amplitude versus probability or something, all this has been factored into the definition of the Rabi frequency in such a way the atom is really blinking between ground and excited state with a frequency which is the Rabi frequency. And this frequency leads-- and the spectrum is now the sum and difference of the relevant frequencies. And the relevant frequency is the resonance frequency and the Rabi frequency. So the answer is B. But this was sort of more in form of a stick diagram. Now I want to bring in in the next question the line broadening. So the question is-- and there are again four choices. The question is, we have now our three sticks. One is a carrier, and these are the two sidebands split off by a Rabi frequency. Are all three sticks now sharp delta functions? Are all three sticks broadened by the natural line widths? Is the central part sharp, and only the sidebands are broadened? Or do we have a sharp stick with a pedestal, and then two sidebands? I'm not actually expecting all of you to know the answer, because this is really now getting into more subtle things. But just in terms of show-and-tell, and attract your curiosity to the second part of the course, what would you expect? OK. Yes. Good. A is definitely eliminated. I mean, if we scatter two photons simultaneously, there will be sort of broadening of gamma, because the atom has a natural broadening. So we can't expect that the two photon processes are sharp. There's no reason to expect that. The answer between B, C, and D depends now. If we have very, very little power, what we observe is, of course, the low-power delta function. And then we observe very, very small sidebands which are broadened. So answer C is correct. If you just think about it-- what is the structure of infinitesimal peaks? But when you crank up the power, and what you have is you have an elastic scattering peak, which has a delta function, energy conservation at the single photon level. And then you have those-- they are called inelastic peaks. But when you crank up the power, then the central feature has also an inelastic component. You can sort of argue that in Rabi oscillation, you are in the ground state, excited state, ground state, excited state. But if you do now light scattering in the excited state, you automatically broaden it by the lifetime of the excited state. And if you crank up the power higher and higher, the elastic peak will be more and more suppressed. And you find actually a spectrum which has only three broadened peaks, and the delta function has disappeared. It will actually be something where we need more knowledge. The broadening of those peaks is not gamma anymore. One of them will be 3/2 gamma, the other one will be 1/2 gamma. But this now really requires a deeper understanding in terms of the atom picture. But the scale of all the broadening is a factor on the order of unity times gamma. Any questions? So the general answer, sorry, would be-- the generic picture which you should keep in your mind is that. You have inelastic scattering on all three peaks. But the limit of D at very low power is that. And at very high power, is that. The elastic component of the central peak is either 100% or 0%. OK. So our last topic for line shapes and line broadening is pressure broadening. And as I mentioned in the introduction, pressure broadening has made modern appearances in the form of clock shift, and mean field shifts, and mean field broadenings in Bose-Einstein condensates. 
But let me sort of tell you how you should understand pressure broadening. I'm using here the semi-classical picture that an excited atom acts as an oscillator. And what happened is the atom-- you can say the oscillation is superposition of ground and excited state. It oscillates. But after some time tau, there may be a quenching collision, a de-excitation collision. Then the atom is sort of in the ground state. And then it waits until it's excited again by the laser, and the atomic dipole is oscillating again. In a situation like this, you would expect that the total line widths is actually the sum of two rates. One is the spontaneous emission rate, and the other one would be the collision rate, which is this. And in general, this is sort of how people looked at it. You vary the pressure in your buffer gas cell, and you find that the line width has a component which increases linearly with pressure. So this is one model how you can imagine what happens in a collision. These are like knock-out collisions. When an atom collides, the excitation, the energy disappears. The atom is quenched to the ground state. Well, we can draw up another model, where we have an oscillation of our atomic oscillator. But then after the same time tau, there is now a hiccup in the phase. It collides with another atom. The other atom is not de-exciting. It's not removing the energy. But after the collision, the atom continues to oscillate, but with a very, very different phase. And sort of what I assumed here is this time where the collision happens is very short. It's a very short collision time. That I can approximate the collisions as simply imparting random phase jumps to the atom. Well, if you ask, what is the widths of the spectroscopic line, it's exactly the same result. It just means we have a different rate. It's now a de-phasing rate, which is 1 over tau. And this is added to the spectroscopic widths. OK. This is sort of just creating some phenomenological picture. Let's now ask a little bit more microscopically. How can it happen? How can it happen that there is a phase change, that there's a change in the phase of the atomic oscillator? And this leads us now to consider the interaction potential as a function of r between two atoms. We have one atom which is sort of our active atom, which has an excited and ground state. And this atom is now getting closer to, let's say, an argon atom, which acts as buffer gas. And what I'm plotting in this graph now is the interaction potential between our light-emitting, or light-absorbing, atoms and the buffer gas atom. And let's just genetically assume there is sort of something like a molecular potential. But in general, the molecular potentials will be different in the ground and excited state. And of course, if you do hyperfine transitions between atoms in a Bose-Einstein condensate, you may have a scattering length, which is different in the two states. And I think you see the connection. So in general, if the interaction environment is different between ground and excited state, we expect that the interaction potential causes certain shifts. So we could use the picture that we have a phase evolution. And the phase evolution is a frequency difference which is simply given by the difference of the two potentials. So the picture is a little bit-- which is actually very valid for cold collisions, which-- well, really a new chapter in atomic physics opened up by laser cooling and magneto-optic traps. 
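In symbols, both collision models above give a linewidth which is the sum of two rates (the numerical prefactor in front of 1 over tau depends on whether one quotes half or full widths, so take it as a sketch):

\[
\Delta\omega \;\approx\; \Gamma_{\rm spont} \;+\; \frac{2}{\tau_{\rm col}},
\]

and the phase accumulated during one collision, in the picture of the two potential curves, is

\[
\Delta\phi = \int dt\;\frac{V_e\big(R(t)\big) - V_g\big(R(t)\big)}{\hbar},
\]

where V_e and V_g are the excited- and ground-state interaction potentials along the collision trajectory R(t). When such phase shifts are of order pi or larger, each collision interrupts the coherence of the atomic oscillator.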
An atom, when it emits a photon, would emit here at the resonance frequency. But if the collision is very, very slow, it will actually emit, with a certain probability, a photon which is shifted by exactly this expression. And in that limit, that could be very interesting, because by analyzing the spectrum of the immediate light, you learn something about the interaction potential of two atoms. But of course, this argument has a little bit of a flaw, because you can observe a frequency not instantaneously. You can observe-- if a collision happens very fast, you would actually go through those frequency changes so fast that you cannot resolve them. The question is, which one is larger? So if the frequency shift is larger than the inverse time for the duration of the collision, then you can observe it. Otherwise, you can't. So the picture we should draw is now the following. And this is sort of a microscopic picture on those phase jumps, which I mentioned earlier. That you have your atomic oscillator. The atom is oscillating. But now it comes close to another atom. And let's assume there is an energy increase between ground and excited state due to the presence of the buffer gas atom. Then you would say you get a quick oscillation. Depending on the impact parameter, it can last various amount of time. And then later, the oscillation starts. But essentially, this causes a random phase jump. So this time here is the collision time tau c. What we expect now is we have to now interpolate between two models. I'm not taking it any further than that. We have an interesting line shape. There may be a central part which is simply collisionally broadened by the interruptions of the phase, by the number of collisions with the collision time tau. But if when the atoms collide, there is a huge frequency shift momentarily, that this will actually affect the wings. So for frequency shifts which come from the interaction potential which are much larger than the collision time, than the time between collisions, we will observe something which goes by the name "far wing broadening." And also, the central part of the potential is simply, you can say, it's a Lorentzian which is simply broadened by the collision rate. There is no microscopic information-- what is the nature, what is going on during the phase jumps. The wings will have actually information about the molecular potential. And I just wanted to present it to you in this way to sort of show how you actually have, in a line shape, often two effects. One is pretty much just the interruption of the coherence which gives the central line. But you still have, at least in some limit, information about what causes those phase jumps. And this appears in the wings. 20 years ago in the atomic physics course, we taught you a theory how to describe that. It could be probably taught in one hour. There are some links on the Wiki, but I don't want to carry that further. Any questions about that? AUDIENCE: So why has the importance decreased? PROFESSOR: The importance has decreased of that, because I mean, who of you is studying atoms and gas cells? Nobody. The frontier has moved on. We are much more in a regime where we are not observing atoms in certain environment. We are creating an interesting system out of atoms by putting the atoms in a well-defined environment where those things are absent. Or in the ultra-cold domain, and de Broglie wavelengths is so long that this kind of model for collisions is no longer applicable. 
We are in the extreme case of a single partial wave, where a single parameter, the scattering lengths, describes all of it. OK. Cory? AUDIENCE: Yeah. Could you talk for a little bit about how this model of collisions breaks down and allows for Dicke narrowing? PROFESSOR: Yes. Thank you. That was one comment I forgot. What happens is you have Dicke-- thanks for this question. You have Dicke-- I drew up the two potential curves. If the two potential curves are absolutely identical, the atoms can approach each other and can collide. But there is never any perturbation to the atomic oscillator, because the frequency between ground and excited state is omega 0, no matter whether the atom undergoes a collision or not. In this limit, which is often realized in collisions with rare gases, in this limit, you do not have any phase interruption by the collision. And this is the prerequisite for Dicke narrowing. OK. I think we have the full picture now. And that means we can move on to two-photon excitations. So whenever you start a new chapter, you should motivate it. And the question is, why should you be interested? Well, I can also turn it around and said, why not? Because those things happen. And if those things happen, we want to learn about them. And they actually happen very naturally. The moment you go to higher laser power and you go beyond the lowest order of perturbation theory, you may actually excite two photons at the same time. And actually, what we just discussed about the emission spectrum of an atom, with the sidebands, with the broadening, these are actually examples where we really have to think about it in a two-photon picture, not a single-photon picture. The second motivation is very practical. We may want to excite an atom from a low-lying level to a high-lying level. But the only laser we can buy, or the only laser we have in the laboratory has lower frequency. That doesn't mean that the case is lost by just stacking two photons on top of each other. We can bridge the gap and still excite the atom, which has transitions only far in the UV. And all we have is these visible lasers. There's another thing which changes. Maybe we want to excite hydrogen from the 1s to the 2s level. 1s and 2s have the same parity. And if we try to excite it with one photon, because of the dipole operator which has odd parity, this is not allowed. So there are maybe selection rules where we can go to desirable final state only with two photons and not with one photon. Finally, I will show you on Wednesday-- remember, there is no class on Monday, because the whole CUA will go crazy with the NSF site visit on Monday, Tuesday. So the next class on Wednesday. And on Wednesday, I will actually show you if the two photons come from counter-propagating beams, the net momentum transfer to the atom is 0. And net momentum transfer of 0 means zero Doppler shift. So two photons give us an opportunity, which doesn't exist with the single photon. We can excite the atom without transferring momentum. And this is the basis of a Doppler free spectroscopy technique. And finally, for purely conceptional reasons, we've talked so much about excitation of an atom, and then ask, what is the rate of light scattering? So we have actually used two-photon processes in the course many, many times without actually adequately addressing them. I didn't tell you anything wrong. I was sort of choosing my words carefully, that everything I told you about light emission and absorption is comparable with the correct picture. 
And the correct picture is if a photon goes in and a photon goes out, you have to describe it as one process involving two photons. So I think we have enough reasons why we should be interested. And if I take this picture, that I start in one state and have a two-photon absorption process, I actually want to redraw it, because in the dipole approximation, which I want to use here, we have the dipole operator, and in second quantization it is a plus a dagger. The electric field is a plus a dagger. So each application of the operator takes us from one state to the next with a single photon. So therefore, if you talk about two photons, we can only absorb them if we have an intermediate state, which I will call f. So we should really think about two-photon absorption as a two-step process which involves an intermediate state. And it has to be like that if we want to use, as the operator for atom light interaction, the dipole operator, because the dipole operator is creating and-- the dipole operator involves the electric field. And that creates or annihilates one photon at a time. Let me just make a side remark, but this will be addressed in 8.422. If you use the description of the interaction with the electromagnetic field, which is the p minus eA Hamiltonian, and you square it, if you square that, you get an A squared term. The A squared term is actually the product of a plus a dagger squared. So you can actually, with the A squared term, scatter two photons by going from one state to another one. But I'm not discussing it here. And I leave a detailed comparison and discussion of that to 8.422. Now we are strictly adhering to the dipole approximation. Oh, actually, let me make a comment about it, because I just realized if I say something and don't say it fully, I may confuse people. So what I want to say is, we will show, in 8.422, that the two pictures, the p minus eA Hamiltonian and the dipole Hamiltonian, are equivalent. They're connected by a canonical transformation. So therefore, it is not a fundamental aspect of nature, whether you can scatter only one photon when you go from one state to the other. There are two equivalent descriptions. In one description, you sometimes scatter two photons. In the other description, you scatter only one photon. And you will all get a PSET in 8.422 where you show for one example that the results are the same when you sum up over all possibilities. So therefore, maybe I should rather take the position here that the generic description of light scattering uses the dipole approximation. And in the dipole approximation, we are describing the atoms, that they only exchange one photon when they go from one state to the next. And this is a full description but not the only description. So therefore, we need an intermediate state. But assuming that the first laser is not resonant with the transition to the intermediate state, we need this dashed line. This dashed line is sometimes called a virtual state. And in the following discussion, we will really learn what is the nature of the virtual state and what does it mean. But we need a stepping stone for a two-photon process in the form of intermediate states. So maybe I should just use three minutes and show you how you would calculate it. I use pre-written slides here, because I'm getting a little bit bored of just writing down the same or similar perturbative expressions. And I just step you through. So I said we use the dipole operator. 
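As an aside on the A squared remark above, the minimal-coupling Hamiltonian reads, schematically and with gauge details suppressed,

\[
H = \frac{(\mathbf{p} - e\mathbf{A})^{2}}{2m} + V
  = \frac{p^{2}}{2m} + V \;-\; \frac{e}{m}\,\mathbf{p}\cdot\mathbf{A} \;+\; \frac{e^{2}A^{2}}{2m},
\]

and since \(\mathbf{A} \propto (a + a^\dagger)\), the last term contains \((a + a^\dagger)^{2}\), i.e. it can create or annihilate two photons in a single step. In the dipole (length) gauge used in this lecture that term is transformed away, and every step exchanges exactly one photon.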
But now-- and this is a new thing-- the electric field has not only one component and one frequency. We want to look at two photons, so therefore, it has two frequencies. So therefore, our perturbation Hamiltonian is what we had so far for monochromatic field. But then we have an additional term where we just change the index from one to two to have the second laser field described. So let me now introduce matrix elements. We have to go from state a to some intermediate state. And then we go to state b. So therefore, we need matrix elements which take us from state a to state f. And again, we have two such possibilities-- one at omega 1, one at omega 2, one driven by field 1, the other one driven by field 2. And yeah, we all have the counter-rotating and co-rotating terms. What happens is we have, of course, a complex conjugate. But I told you already several times that there is an e to the minus i omega 1, which is responsible for absorption. The plus i omega 1 does emission. And let me just-- I want to do the rotating wave approximation, only keep the relevant term. So I look at two-photon absorption. And that means I only keep forward the minus terms. You could, if you want, duplicate the lengths of each formula and carry forward the counter-rotating term. You're not learning anything new. You get additional Bloch-Siegerts and AC stark shifts. This is not the new feature I want to implement. So let's focus on the new aspect. And this is the following. We do first-order perturbation theory, which takes us in the first step to the intermediate state. And you have seen this expression many, many times. The only thing is in addition to what we had for one laser beam, we have to add a second possibility, which comes from the second field. And now we are at the intermediate step. And we want to take the second step to the final state. So what we are now doing is we are writing down Schrodinger's equation, derivative of the wave function is Hamiltonian times the wave function. And we are especially interested in how do we accumulate probability or amplitude in the final state. But what we are doing is on the right-hand side, because we can't start in the ground state, we've gone to the intermediate state, we are now plugging in the previous result, the first-order result for the intermediate state. So therefore, by using second-order perturbation theory, we want the second-order perturbation theory, so we integrate this equation with respect to time. But on the right side, we use our first-order result, which we had derived earlier. And then we just write down the integral. Everything is just exponential function. And we get the result here. So now with two steps, we have obtained an expression for-- I know time is over, but let me just finish the argument. So we have now an expression in second-order perturbation theory. What is the amplitude in the final state? And things look a little bit messy, because we have four terms. And we should get four terms, because we have two interactions. We can take one photon of one laser beam-- first step, omega 1; second step, omega two. We can switch the order of photons. And of course, if you write down everything correctly, nothing is forbidding the atom of taking both photons out of the same laser beam. Yeah, let me stop here.
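A compact way to write the result of that two-step calculation, consistent with the verbal description above though not copied from the slides (signs and factors of i are suppressed): keeping only the co-rotating terms and summing over intermediate states f, the second-order amplitude in the final state b behaves like

\[
c^{(2)}_b(t)\;\sim\;\sum_f
\frac{\tfrac{1}{2}\Omega^{(2)}_{bf}\;\tfrac{1}{2}\Omega^{(1)}_{fa}}{\omega_{fa}-\omega_1}\;
\frac{e^{\,i(\omega_{ba}-\omega_1-\omega_2)t}-1}{\omega_{ba}-\omega_1-\omega_2}
\;+\;\big(\omega_1\leftrightarrow\omega_2\big)
\;+\;\big(\text{both photons from the same beam}\big),
\]

where the Omegas are the single-photon Rabi frequencies of the two legs and omega fa, omega ba are atomic frequencies. The prefactor plays the role of an effective two-photon Rabi frequency, roughly \(\Omega^{(1)}\Omega^{(2)}/(2\Delta)\) with \(\Delta\) the one-photon detuning of the intermediate state, and the last factor enforces energy conservation on the overall two-photon process.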
MIT_8421_Atomic_and_Optical_Physics_I_Spring_2014
5_Resonance_V_and_Atoms_I.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: The last topic we discussed on Monday was the situation of the Landau-Zener transition, where you sweep through resonance. You've all seen the Landau-Zener formula, you all know that a crossing turns into an avoided crossing. But I tried to at least provide you additional insight by emphasizing that the whole process is absolutely coherent. That it's fully phase coherent throughout. And what happens is there is a coherent transfer of amplitude from the state 1 to the state 2. This is nothing else than Schrodinger's equation. But I want to point out that for the short times you sweep through, there is no T2 dependence here. In other words, when I discussed what is the effective time during which the transfer of population takes place, the time here is so short that the detuning doesn't matter. Actually, the criterion which leads to this effective time during which the transfer takes place is actually exactly the time window where the detuning is small enough that-- to say it loosely, it doesn't make a difference whether you're in resonance or slightly away. The atom experiences the same drive field. And so based on this criterion, we discussed that we can understand the Landau-Zener probability in the perturbative limit as a coherent process where we transfer population, we transfer amplitude with a Rabi frequency during this effective time, delta t. It's not the only way we can look at it, but it's one way which I think is insightful. Any questions about this or what we discussed on Monday? If not, I would like to take the topic one step further and discuss the density matrix formalism. So we have so far discussed purely Hamiltonian unitary evolution. Namely, the Schrodinger equation. And of course, unitary evolution leaves the system which is in a pure state in a pure state. It's just that the quantum state evolves. However, that means we cannot describe processes like decoherence or some losses away from the two levels we are focusing on. And so now we want to use the density operator, the density operator formalism, to have a description of a two-level system which goes beyond that. So let me just-- so the Schrodinger equation deals only with pure states; it cannot describe loss of particles, loss of photons, and decoherence. Well, there is one exception. If the decoherence process is merely a state-dependent loss of atoms to a third state, then you can still use the wave function formalism. So this is the exception. If you have two states, that's the excited state. If you have two states and all of what happens is that you have some loss to some other levels and their rate coefficients, then one can still use a Hamiltonian description, but you have to replace the eigenvalues by complex numbers. In other words, you have to add an imaginary part to the energy levels. And that means the time evolution is exponentially damped. So that's as much as you can incorporate decoherence and losses into a wave function formalism. However, many other processes require the formalism of the density matrix. And the simplest process where the wave function formalism is absolutely inadequate is the process of spontaneous emission. 
When you have a loss in the excited state, you could still describe the excited state with a complex energy eigenvalue. But the fact that whatever is lost from the excited state is added to the ground state-- there is no wave function formalism which can describe that. So for those processes and for decoherence in general, we require the use of the density operator. So I know that most of you have seen the density operator in statistical mechanics or some advanced course in quantum mechanics. So therefore, I only spend about five minutes on it. So I want to kind of just remind you or give you a very short introduction. So for those of you who have never heard about it, I probably say enough that you understand the following discussion. And for those of you who know already everything about it, five minutes of recapitulation is hopefully not too boring. So my way of introducing the density operator is to first introduce it formally, write down a few equations for a pure state, but then in a moment, add something to it. So if you have a time-dependent wave function, which we expand into eigenfunctions, then we can, in these spaces, define arbitrary operators by matrices. We want to describe our system by finding out measurable observables, expectation values of operators, which of course, depend on time. And this is, of course, nothing else than the expectation value taken with a time-dependent wave function. But now we can expand it into the basis m, n and we can then rewrite it as a matrix, which is a density matrix. Or, simply as the trace of the density matrix with the operator. And what I introduced here as the density matrix can be written as the ket psi of t times the bra psi of t. And the matrix element is given by this combination of amplitudes when we expand the wave function psi into its basis. So this density matrix has diagonal and off-diagonal matrix elements. The diagonal matrix elements are called the populations, the populations in state n, and the off-diagonal matrix elements are called coherences. OK, so this is just rewriting the Schrodinger equation expectation value in a matrix formalism. Yes, please. AUDIENCE: Why are you starring the coefficients? PROFESSOR: Oh, there's one star too many. Thank you. AUDIENCE: That makes sense. PROFESSOR: But the reason why I wrote it is that we want to now add some probability to it. We do not know for sure that the system is in a pure state. We have probabilities P i that the system is in a quantum state psi i. So we add another index to it. And when we perform the expectation value-- there's also one star too many. When we perform the expectation value, we sort of do it for each quantum state with a probability P i. So we are actually-- and this is what I wanted to point out. This was the purpose of this short discussion, that we are now actually performing two averages. One can be regarded as the normal quantum mechanical average when you find the average value or the expectation value for a quantum state. So this is sort of the statistics or the averaging, which is inherent in quantum physics. But then in addition, there may simply be another probabilistic average because you have not prepared the system in a pure state, or the system may undergo some stochastic forces and wind up in different states. So there are two kinds of averages which are performed. And the advantage of the density matrix formalism is that both kinds of averages can be done simultaneously in a very compact formalism.
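To keep the notation in one place, the compact formalism just described is usually written as follows (my notation; the blackboard symbols may differ slightly):

```latex
\rho = \sum_i p_i\,|\psi_i\rangle\langle\psi_i|, \qquad
\rho_{mn} = \sum_i p_i \, c^{(i)}_m \, c^{(i)*}_n, \qquad
\langle \hat{A} \rangle = \mathrm{Tr}\,\bigl(\rho\,\hat{A}\bigr).
```

The diagonal elements rho_nn are the populations and the off-diagonal elements rho_mn are the coherences; the single trace performs both the quantum-mechanical average and the probabilistic average over the p_i.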
So therefore, I put this probabilistic average in now into the definition of the density matrix. Or I can write the density matrix in this way. And with this extended definition of the density matrix, both kinds of averages are done when I determine the expectation value and operator by performing the trace with the density matrix. A lot of properties of the density matrix, I think are-- you're familiar with many of those. For instance, that Schrodinger's equation for the density matrix becomes this following equation. You can derive that by-- if you take the Schrodinger equation and you apply the Schrodinger equation to each state psi i. And then you do the averaging with a probability P i. You find that the Schrodinger equation for each state psi i turns into this equation for the density matrix. Let me just write down that the purpose now is we have two averages here. One is the quantum mechanic average and one is sort of an ensemble average with the probabilities P i. I want to discuss in a few moments the density matrix for two-level system. So I have to remind you of two properties, that the density matrix is normalized to unity. So there's probability of unity to find the system in one of the states. When we look at the square of the density matrix, a trace of rho square, this is simply the probability-- the sum of the probability squared. And this is smaller than 1. And the only way that it is one is for pure state. So pure state is characterized by the fact that there is only-- that we can find the basis where only one of the probabilities, P i is non-vanishing. And then, of course, almost trivially the trace rho is 1 and the trace rho square is 1. So, so far I've presented you the density matrix just as an elegant way of integrating the two averages into one formalism. And in essence, this is what it is. But you can now also use the density matrix if the whole system undergoes a time evolution, which is no longer unitary. No longer described by a Hamilton operator. Because you're interested in the time evolution or a small system which is part of a bigger system. The bigger system is always described by unitary time evolution, but a smaller system is usually not described by unitary time evolution. And that's when the density matrix becomes crucial. Of course, you can see this is just you describe the smaller system and you do some probabilistic average what the other part of the system does. And therefore, it's just another version of doing two averages. But this is sort of why we want to use the density matrix in general. So we want to use the density matrix for non-unitary time evolution. And the keyword here is that this is often the situation for open systems where we are interested in a small system, but it is open to a bigger system. Like, we're interested to describe an atom, but the atom can spontaneously emit photons into other parts of [INAUDIBLE] space. And we're not interested in those other parts of [INAUDIBLE] space. So an open system for this purpose is where we limit our description to a small part of a larger system. Again, an atom interacting with all the modes of the electromagnetic field, but we simply want to describe the atom. And then, we cannot use a wave function anymore. We have to use the density matrix. OK After these preliminaries, I want to now use the density matrix formalism for arbitrary two-level systems. So what is the most general Hamiltonian for the most general two-level system? 
Well, the most general Hamiltonian is the most general Hamiltonian we can construct with 2 by 2 matrices. And the basis set to expand the 2 by 2 matrices are the Pauli matrices. So if you expand the Hamiltonian into the unity matrix, sigma x, sigma y, and sigma z, we have four coefficients, four amplitudes, which are complex in general-- omega 1, omega 2, omega 3. And here is something which I've called omega bar. By appropriately shifting what is the 0 point of energy, we can always get rid of this. So this just has definitional character. So therefore, the most general Hamiltonian for any two-level system can be written in this very compact way, that it is the scalar product of the vector omega-- omega 1, omega 2, omega 3-- with the vector sigma of the three Pauli matrices-- sigma x, sigma y, sigma z. OK, so this is a way to write down the most general Hamiltonian for a two-level system. Now, we describe two-level systems by a density matrix, by a statistical operator, which is also a 2 by 2 matrix. And the most general density matrix can also be expanded into its four components. Sort of the basis set of matrices is the unity matrix and the three Pauli matrices. So 1, 2, 3. Of course, this time we cannot throw away the unity matrix because otherwise the density matrix would have no trace and there would be no probability to find the particle. But we can, again, write it in a compact form that it is 1/2-- yes, I'm using the fact now that the trace of rho is r0. And this, by definition, or by conservation of probability, is 1. So therefore, r0 is not a free parameter. It's just the sum of all the probabilities to find the system in any state. And the non-trivial part is then the scalar product of this vector r-- rx, ry, rz-- with the vector of the three Pauli matrices. Well, so we have our most general Hamiltonian. We have our most general density matrix. And now we can insert this into the equation of motion for the density matrix. Which, as I said before, is just a reformulation of Schrodinger's equation. And if you insert the Hamiltonian and the density matrix into this equation, we find actually something which is very simple. It says that this vector r, which we call the Bloch vector-- the derivative of the Bloch vector is given by the cross product of the vector omega, which were the coefficients with which we parametrized the Hamiltonian, cross r. The derivation is straightforward. And you will be asked to do that on your homework assignment number 1. But it has a very powerful meaning. It tells us that an arbitrary two-level system with an arbitrary Hamiltonian can be regarded as a system where we have a vector R which undergoes precession. This is the time evolution of the system. So this is a powerful generalization from the result we discussed previously where we found that if you have an arbitrary quantum-mechanical spin, the time derivative can be written in that way. So previously, we found it for a pure state, but now we find it-- that it's even valid for a general density matrix and its time evolution. So what I've derived for you is a famous theorem, which is traced back to Feynman, Vernon, and Hellwarth. It's sort of a famous paper. So this famous theorem-- and I've summarized it here for you-- says that the time evolution of the density matrix for the most general two-level system is isomorphic to pure precession.
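Written out, the parametrization and the result are as follows. This is one standard convention; in particular the factor of hbar/2 in front of the Hamiltonian is my choice of normalization and may be absorbed differently on the blackboard:

```latex
H = \frac{\hbar}{2}\,\boldsymbol{\omega}\cdot\boldsymbol{\sigma}, \qquad
\rho = \tfrac{1}{2}\bigl( r_0\,\mathbf{1} + \mathbf{r}\cdot\boldsymbol{\sigma} \bigr), \qquad
i\hbar\,\dot{\rho} = [H,\rho]
\;\;\Longrightarrow\;\;
\dot{\mathbf{r}} = \boldsymbol{\omega}\times\mathbf{r}.
```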
And that means it's isomorphic to the behavior of a classical moment, classical magnetic moment, in a suitable time-dependent magnetic field. So when you have a Hamiltonian, which is characterized by-- the most general Hamiltonian is characterized by the three coefficients-- w1, w2, w3. But if you would create a classical system where w1, w2, and w3 are the time-dependent components, xyz-component of a magnetic field, then the precession of a magnetic moment would be exactly the same as the time evolution of a quantum-mechanical density matrix. Any question? So in other words, we've started out with rotating frames and rotation and now we've gone as far as I will go. Namely, I've in a way told you that an arbitrary quantum-mechanical two-level system, the time evolution is just precession. It's rotation. There is nothing more complicated possible. Well, unless we talk about decoherence. If we have such a Hamiltonian, we know, of course, that a pure state will stay pure forever. And you can immediately verify that if you look at the trace of rho square. If the trace of rho square is 1, we have a pure state. And now we have parametrized the density matrix with the Bloch vector component-- r1, r2, r3. So in those components, the trace of rho square can be written in this way. And of course, r0 square was constant. This was our normalization of 1. So the question is now when we have an arbitrary time evolution, which we know now according to the Feynman, Vernon, Hellwarth theorem. The arbitrary time evolution of the Bloch vector can be written as omega cross r. So this equation tells us immediately that the length of the vector r is constant because r dot is always orthogonal to r. And therefore, the lengths of the vector r is not changing. So what we have derived says that with the most general Hamiltonian, the lengths of the vector r will be constant. And therefore, the trace of rho square will be constant. This is constant because r dot is perpendicular to r. So this will tell us that a pure state will just precess with the constant lengths of its Bloch vector forever. However, we know that in real life some coherences are lost and now we have to introduce something else. So this does not describe loss of coherence. So now we are just one tiny step away from introducing the Bloch equations. We will fully feature the optical Bloch equations in 8.422. but since we have discussed two-level systems to quite some extent, I cannot resist to show you now in three, four minutes what the Bloch equations are. And then when you take the second part of the course, you will already be familiar with it. So let me just now tell you what has to be added to do this step from the previous formalism to the Bloch equations. And this is the one step you have to do. We have to include relaxation processes into the description. So my less than five-minute way to now derive the Bloch equations for you goes as follows. I first remind you that everything has to come to thermal equilibrium. In other words, if you have an atomic system, if you have a quantum computer, whatever system you have and you prepared in a pure state, you know if you will wait forever, the system will be described by a density matrix, which is the density matrix of thermal equilibrium, which has only diagonal matrix elements. The populations follow the [INAUDIBLE] factor. And everything is normalized by the partition function. So we know that this will happen after long, long times. 
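The thermal equilibrium density matrix referred to here is the standard one: diagonal in the energy eigenbasis, with populations given by Boltzmann factors and normalized by the partition function,

```latex
\rho_T = \frac{e^{-H/k_B T}}{Z}, \qquad
Z = \mathrm{Tr}\, e^{-H/k_B T}, \qquad
(\rho_T)_{nn} = \frac{e^{-E_n/k_B T}}{Z}, \qquad
(\rho_T)_{m\neq n} = 0 \quad\text{(energy eigenbasis)}.
```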
So no matter with what density matrix we start out, if we start with a density matrix rho pure state, for instance, there will be inevitably some relaxation process which will restore rho to rho t, to the thermal equilibrium. Now, how this happens can be formulated in a microscopic way. And we will go through a beautiful derivation of a master equation and really provide some insight what causes relaxation. But here for the purpose of this course, I want to say, well, there is relaxation. And I want to introduce now, in a phenomenological way, damping and damping times. So the phenomenological way to introduce damping goes as follows. Our equation of motion for the density matrix was that this is a unitary evolution described by-- the Schrodinger equation was that the density matrix evolves according to the commutative with the Hamiltonian. But now-- and I have to pick big quotation marks around it because this is not a mathematically exact way of writing it. But now I want to introduce some term which will damp the density matrix to the thermal equilibrium density matrix with some equilibration time, Te. I mean, this is what you can always do if you know the system is damped. You have some coherent evolution, but eventually you added a damping term and you make the damping term-- you formulate in such a way that asymptotically the system will be damped to the thermal equilibrium. In other words, the damping term will have no effect on the dynamics once you've reached equilibrium. So it does all the right things. Of course, we have to be a little bit careful because everything is either an operator or matrix. And I was just adding the damping term as you would probably do it to a one-dimensional equation. So therefore, let me be a little bit more specific. In many cases, you will find that there are two distinctly different relaxation times. In other words, the system will have usually at least two physically distinct relaxation times. They are traditionally called T1 and T2. T1 is the damping time for population differences. So this is the damping time to shovel population from some inverted state or some other state into the equilibrium state. That usually involves the removal of energy out of the system. So it's an energy decay time. And if you would inspect our parameterization of the Bloch vector, population or population differences are described by the z-component, the third component of the Bloch vector. Well, we have other components of the Bloch vector which correspond to coherences. The off-diagonal matrix element of the density matrix. And they're only nonzero if you have two states populated with a value-defined relative phase. When the system, quantum mechanical system, loses its memory of the phase, the r1 and r2 component of the Bloch vector go to 0. So therefore, the time T2 is a time which describes the loss of coherences, the dephasing times. And in most situations, well, if you lose energy, you've also lost-- if you lose energy because you quench a quantum state, you've also lost the phase. So therefore in general, T2 is smaller than T1. Often by a lot. So with those remarks about the two damping times, I can now go back to the equation at the top, which was sort of written with quotation marks, and write it in a more accurate way as a matrix equation for the damping of the components of the density matrix expressed by the Bloch vector. 
In other words, the equation of motion for the z-component of the Bloch vector, which is describing the population, has a coherent part, which is this generalized precession. And then, it has a damping part, which damps the populations to the equilibrium value with a damping time T1. And then we have the corresponding equations for the x and y, or the 1 or 2 component of the optical Bloch vector. We just replace the z index by x and y from the equation above, but then we divide by a different relaxation time, T2. So what we have found here, these are the famous Bloch equations, which were introduced by Bloch in 1946. Introduced first for magnetic resonance, but they're also valid in the optical domain. For magnetic resonance, you have a two-level system, spin up and spin down. In the optical domain, you have a ground and excited state. In the latter case, they're often referred to as the optical Bloch equations. Any questions about that? Yes, please. AUDIENCE: So what determines [INAUDIBLE]? PROFESSOR: Well, that's a long discussion. We spent a long time in 8.422 discussing various processes. But just to give you an example, if you have a gas of atoms and there is a slightly inhomogeneous magnetic field, that would mean that each atom, if you look at it as a precession motion, precesses at slightly different rates. And the atoms will decohere. They all will eventually wind up with a different phase, so that if you look at the average of the coherence, it's equal to 0. So any form of inhomogeneity, which is not quenching a quantum state, which is not creating any form of de-activation of the excited state, can actually decohere the phase. And these are contributions to T2. So often, contributions to T2 come from an inhomogeneous environment, but they are not changing the population of states. Whereas, what contributes to T1 are often collisions. Collisions where, when an atom in an excited state collides with a buffer gas atom, it undergoes a transition from the excited to the ground state. So these are two distinctly different processes. One is really a collision and energy transfer. Each atom has to change its quantum state. Whereas, decoherence can simply happen when there is a small perturbation of the energy levels due to external fields. And then, the system as an ensemble loses its phase. In the simplest way, you can assume inhomogeneous broadening. But you can also assume, if the whole ensemble is subject to fluctuating fields, then since you don't know how the fields exactly fluctuate, after a characteristic time you no longer have a phase-coherent system. Rather, the phase at a later time is no longer deterministically related to the phase at which you prepared it. And that would mean the system has dephased. And this dephasing time is called the T2 time. Nancy. AUDIENCE: I think I have two things. First, you said that it's generally true that T2 is less than T1. Is it ever true that it's not the case? PROFESSOR: Oh. There is one exception. And that's the following. Let me put it this way, every process which contributes to T1 will also contribute to T2. But there are lots of processes which only contribute to T2. So therefore, in general, T2 is much faster because many more processes can contribute to it. However, now if you ask me, is it always true? Well, there is one glitch. And this is the following. T1 is the time to damp populations. And that's the damping of psi square. T2 is due to the damping of the phase. And this is actually more a damping time of the wave function itself.
And if you have a wave function psi which is damped with a damping time tau, psi squared is damped with twice the damping rate. So if the only process you have is, for instance, spontaneous emission, then you find out that the damping rate for population is gamma. This is the definition of the spontaneous emission rate. But the damping rate 1 over T2 is 1/2 gamma. And this is simply because of the way how we have defined it-- one involves the square of the wave function. The other one involves simply the wave function. So there is this factor of 2 which can make-- by just a factor of 2-- T1 faster than T2. But apart from this factor of 2, if T2 would be defined in a way which would incorporate the factor of 2, then T2 would always be faster than T1. AUDIENCE: Yeah, it makes sense [INAUDIBLE]. I can't imagine that if the system has a smaller T1, it still has any coherence left in it. PROFESSOR: So maybe to be absolutely correct, I should say this. T1 is larger than or equal to T2 over 2. In general, we have even the situation that T1 is much, much larger than T2. But with this factor of 2, I've incorporated this subtlety of the definition. Other questions? Yes. AUDIENCE: Just a question about the real motivation of using the Bloch equation [INAUDIBLE]. I understand that [INAUDIBLE]. But you mentioned before that you can't describe spontaneous emission with a Hamiltonian formalism. PROFESSOR: Yes. AUDIENCE: But couldn't you use-- [INAUDIBLE]. Don't you still get spontaneous emission out of the coupling into the continuum? The emission into the different modes? You don't necessarily need [INAUDIBLE]. PROFESSOR: Yes, but let me kind of remind you of this. If you are interested in a quantum state and it decays to a level, but we're not really interested in what this level is and we're not keeping track of the population here, then we can describe the time evolution of the excited state with a Hamiltonian-- because of the imaginary part of the energy, the Hamiltonian is no longer Hermitian. And this is what Victor Weisskopf theory does. It looks at a system in the excited state and looks at the time evolution of the excited state. But if you want to include in this description what happens in the ground state, you are not having this situation. You have this situation. And what eventually will happen is you can look at a pure state which decays. And this is what is done in Victor Weisskopf theory. But if you want to know now what happens in the ground state, well, I'm speaking loosely, but that's what really happens. Every spontaneous emission adds something to the ground state, but in an incoherent way. So what is being built up in the ground state is not a wave function. It's just population which has to be described with a density matrix. Or in other words, if you have a coherent superposition between excited and ground state, you cannot just say spontaneous emission is now increasing the amplitude to be in the ground state. It really does something fundamentally different. It puts population into the ground state with-- I'm loosely speaking now, but with a random phase. And this can only be described probabilistically by using the density matrix. But what you are talking about is actually, for the Victor Weisskopf theory, pretty much this part of the diagram. We prepare an excited state, and we study it with all its glorious details, with the many modes of the electromagnetic field, how the excited state decays. OK.
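Before leaving two-level systems: since the optical Bloch equations will only be fully featured in 8.422, here is a minimal numerical sketch of the equations introduced above, purely for illustration. The effective field, the relaxation times, and the equilibrium value of the z-component below are made-up numbers, and a simple Euler step stands in for a proper integrator.

```python
import numpy as np

# Bloch equations for the Bloch vector r = (rx, ry, rz):
#   dr/dt = omega x r            (coherent precession, the FVH theorem)
#           - (rx/T2, ry/T2, (rz - rz_eq)/T1)   (phenomenological damping)
# All parameter values are illustrative, not taken from the lecture.

omega = np.array([0.0, 0.5, 2.0])   # effective "magnetic field" (rad per unit time)
T1, T2 = 50.0, 10.0                 # population and coherence damping times
rz_eq = -1.0                        # thermal-equilibrium value of r_z

def bloch_rhs(r):
    """Right-hand side of the damped Bloch equations."""
    precession = np.cross(omega, r)
    relaxation = np.array([-r[0] / T2,
                           -r[1] / T2,
                           -(r[2] - rz_eq) / T1])
    return precession + relaxation

# Start in a pure state along +z and integrate with a simple Euler step.
r = np.array([0.0, 0.0, 1.0])
dt, steps = 0.01, 20000
for _ in range(steps):
    r = r + dt * bloch_rhs(r)

print("final Bloch vector:", r)
print("length |r| (shrinks below 1 once damping is on):", np.linalg.norm(r))
```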
Actually with that, we have finished one big chapter of our course, which is the general discussion of resonance, classical resonance, and our discussion of two-level systems. AUDIENCE: But [INAUDIBLE], wouldn't you have to do a sum over every single mode [INAUDIBLE]? Which would be the exact same thing you do when you do a partial trace over the environment. Isn't the end result sort of the same thing, that you have to do some [INAUDIBLE] infinite sum and integral over all the [INAUDIBLE]? PROFESSOR: You need a sum, but-- AUDIENCE: That's where the decoherence comes from? PROFESSOR: Yes. But if you're interested in only the decay of an excited state, it can decay into many, many modes, but all these different modes provide a contribution to the decay rate gamma. So at the end of the day, you have a Hamiltonian evolution with a damping rate gamma. And this damping rate gamma is the sum of the contributions from all these modes. So in other words, the loss of population from the excited state, you just incorporate it by adding a damping term to the Schrodinger equation because you're not keeping track of the other modes where the population goes. You're not keeping track. You just say, the excited state is lost. You're not interested whether the atoms are now in the ground state or some other state. All you are describing is the loss rate from the excited state. And this is possible by simply doing-- by adding damping terms to the Schrodinger equation. In other words, what I'm saying is actually fairly simple. If you have a coherent state and you lose it, you just lose amplitude. What is left is coherent. When it's gone, it's gone. You have a smaller amplitude, smaller probability. And that's simple to describe. What is harder to describe is if you accumulate population in the ground state and the population arrives in incoherent pieces. How to treat that, this is more complicated. But simply the decay of a pure state, it's just-- you have e to the i omega t, which is a coherent evolution, and then you add an imaginary part and this is a damping time. So what I'm saying, it's sort of subtle but it's also very trivial. I don't know if this addresses your question. In the end, in general you need a density matrix. I just wanted to sort of emphasize that there is a little bit of decoherence where you can still get away with a wave function description. And actually, Victor Weisskopf theory is the wonderful example. OK, so we have discussed resonance, and we have discussed in particular two-level systems. And if I wanted, we could now continue with two-level systems and talk about the wonderful things you can do with two-level systems. Absorbing photons, emitting photons, and all that. But let's put that on hold for a few weeks. And I think what we should first do is realize, where do those levels come from? And we discuss where those levels come from in our discussion of atoms. So our big next chapter is now atoms or atomic structure. And we build it up in several stages. Well, first things first. And the first things are the big chunks of energy which define the electronic structure. We discuss electronic structure for one-electron and two-electron atoms, hydrogen and helium. We don't go higher in the periodic table. But then we talk about other contributions to the energy of atoms, other contributions to the level structure of atoms. And this will start with fine structure, the Lamb shift. We bring in properties of the nucleus by discussing hyperfine structure.
And then as a next big chapter, we will learn how external fields, magnetic fields, electric fields, and electromagnetic fields will modify the level structure of atoms. So by going through all those different layers, we will arrive at a rather complete description. If you have an atom in the laboratory, what determines its energy level and the transitions between those energy levels? So this is our agenda for the next few lectures. Today, we start with single electron atom with a hydrogen atom. And I cannot resist to start with some quotes from Dan Kleppner, who I sometimes call Mr. Hydrogen. So there is some beautiful piece of writing in a reference frame in Physics Today, "The Yin and Yang of Hydrogen." I mean, those of you who know Dan Kleppner know that he's always said hydrogen is the only atom, other atom he wants to work with. Other atoms are too complicated. And he studied-- actually, hydrogen was-- he did a little bit on alkali atoms, of course, but hydrogen was really the central part of his scientific work. Whether he studied Rydberg states in hydrogen or Bose-Einstein condensation in hydrogen. And this column in Physics Today, he talks about the yin and yang. The simplicity of hydrogen. It's the simplest atom. But if you want to work with hydrogen, you need vacuum UV because the step from the 1s to the 2p transition is-- Lyman-alpha is vacuum UV at 121 nanometer. So it's simple, but challenging. And hydrogen is the most pristine atom. But for those of you who do Bose-Einstein condensation, it's the hardest atom to Bose condense. Because the physical properties of hydrogen, it's simple in its structure. But the properties of hydrogen, in particular the collision cross-section, which is important for evaporative cooling, is very, very unfavorable. So that's why he talks about the yin and the yang of hydrogen. Let me just show you the first sentence of this paper, of this reference frame. Oops. Just a technical problem to make this fit the screen. I think I select it. What's going on? Yep. So now it's smaller. I can move it over there. Well, why don't we read it together? It's a tribute to hydrogen, a tribute to famous people. Viki Weisskopf was on the faculty at MIT. I met him, but he was already retired at this point. But then, Kleppner interacted with him. And you see the first quote, "To understand hydrogen is to understand all of physics." Well, it simply says that if you understand some of this paradigmatic systems in physics, you understand all of physics. I would actually say, well, you really have to understand the harmonic oscillator, the two-level system, and hydrogen. And maybe a little bit about three-level systems. But if you understand, really, those simple systems-- they're not so simple. But if you understand those so-called simple system very well in all its glorious detail, then you have really understood, maybe not all of physics, but a hell of a lot of physics. And this quote goes on that, "To understand hydrogen is to understand all of physics." But then Viki Weisskopf said, "Well, I wish I had understood all of hydrogen." And this is sort of Dan's Kleppner's wise words. For me, hydrogen holds an almost mystical attraction. Probably because I'm among the small band of physicists who actually confront it, more or less, daily. So that's what we are starting out now to talk about hydrogen. I know that a discussion of the hydrogen atom, the solution of the Schrodinger equation for the hydrogen atom is in all quantum mechanics textbooks. I'm not doing it here. 
I rather want to give you a few insightful comments about the structure of hydrogen, some scaling of length scales and energy levels, because this is something we need later in the course. So in other words, I want to highlight a few things which are often not emphasized in the textbook. So let's talk about the hydrogen atom. So the energy levels of the hydrogen atom are described by the Rydberg formula. This actually follows already from the simple Bohr model. But of course, also from the Schrodinger equation. And it says that the energy levels-- let me write it in the following way. It depends on the electron mass, the electron charge, h bar squared. It has a reduced mass correction. And then, n is the principal quantum number. It scales as 1 over n squared. So this here is the reduced mass factor. This here is called the Rydberg constant R, sometimes with the index infinity because it is the Rydberg constant which describes the spectrum of a hydrogen atom where the nucleus has infinite mass. If you include the reduced mass correction for the mass of the proton, then this factor which determines the spectrum of hydrogen is called the Rydberg constant with an index H for hydrogen. You find the electronic eigenfunctions as the solution of Schrodinger's equation. And the eigenfunctions have a simple angular part, which are the spherical harmonics. We are not talking about that. But there is a radial part, the radial wave function. So if you solve it, if you find those wave functions, there are a number of noteworthy results. One is, in short form, the spectrum is the Rydberg constant divided by n squared. I want to talk to you about your intuition for the size of the hydrogen atom, or for the size of hydrogen-like atoms. So what I want to discuss is several important aspects about the radius or the expectation value of the position of the electron. And it's important to distinguish between the expectation value for the radius and the inverse radius. The expectation value for the radius is, well, a little bit more complicated-- a0 n squared times 1 plus 1/2 times 1 minus l times l plus 1 over n squared. Whereas, the result for the inverse radius is very simple. What I've introduced here is the natural length scale for the hydrogen atom, which is the Bohr radius. And just to be general, mu is the reduced mass. So it's close to the electron mass. Well, the one thing I want to discuss with you-- we will need it later for the discussion of quantum defects, for field ionization and other processes-- we have to know what the size of the wave function is. And so usually, if you wave your hands, you would say the expectation value of 1/r is 1 over the expectation value of r. But there are now some important differences. I first want to sort of ask you, why is the expectation value of 1/r, why does it have this very, very simple form? AUDIENCE: Virial theorem? PROFESSOR: The Virial theorem. Yes. We know that there is a fairly simple form for the energy eigenvalues. It's 1 over n squared. Well, the Coulomb energy is e squared over r. So if the only energy of the hydrogen atom were Coulomb energy, it's very clear that 1/r, which is proportional to the Coulomb energy, has to have the same simple form as the energy eigenvalue. Well, there is a second contribution to the energy in addition to Coulomb energy. This is kinetic energy. But due to the Virial theorem, the kinetic energy is actually proportional to the Coulomb energy. And therefore, the total energy is proportional to 1/r. And therefore, 1/r has to scale exactly as the energy.
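For reference, the standard hydrogen-like results being quoted here are the following, written for nuclear charge Z (the lecture has Z = 1) and with a0 understood as the reduced-mass Bohr radius; the prefactor of the energy is the Rydberg energy, about 13.6 eV for hydrogen:

```latex
E_n = -\,\frac{\mu e^4}{2\hbar^2}\,\frac{Z^2}{n^2}\ \ \text{(CGS units)},\qquad
\left\langle \frac{1}{r} \right\rangle_{nl} = \frac{Z}{a_0\, n^2},\qquad
\langle r \rangle_{nl} = \frac{a_0\, n^2}{Z}\left[\,1 + \frac{1}{2}\left(1 - \frac{l(l+1)}{n^2}\right)\right].
```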
Since the energy until we introduce fine structure is independent of l, only depends on the principal quantum number n, we find there's only an n-dependence. But if you would ask, what is the expectation value for the radius? You find an l-dependence because you're talking about a very different quantity. So let me just summarize what we just discussed. We have the Virial theorem, which in general is of the following form. If you have potential energy which is proportional to radius to the n, then the expectation value for the kinetic energy is n/2 times the expectation value for the potential energy. The most famous example is n equals 2, the harmonic oscillator. You have an equal contribution to potential energy of the spring and kinetic energy. Well, here for the Coulomb problem, we discuss n equals minus 1. And therefore, the kinetic energy is minus 1/2 times the potential energy. So this factor of 2 appears now in a number of relations and that's as follows. If you take the Rydberg constant, the Rydberg constant in CGS units is-- well, that's the Coulomb energy at the Bohr radius. But the Rydberg constant is 1/2 of it. So the Rydberg constant is 1/2 of another quantity, which is called 1 Hartree. We'll talk, probably not , today, but on Monday about atomic units, about sort of fundamental system of units. And the fundamental way-- the fundamental energy of the hydrogen atom, the fundamental unit of energy is whatever energy you can construct using the electron mass, the electron charge, and h bar. And what you get is 1 Hartree. If you ever wondered why the Rydberg is 1/2 Hartree, what happens is in the ground state of hydrogen, you have 1 Hartree worth of Coulomb energy. But then because of the Virial theorem, you have minus 1/2 of it as kinetic energy. And therefore, the binding energy in the n equals 1 ground state, which is 1 Rydberg, is 1/2 of the Hartree. So this factor of 1/2 of the Virial theorem is responsible for this factor of 2 for those two energies. I usually prefer SI units for all calculations, but there's certain relations where we should use CGS units. Just as a side remark, if you want to go to SI units, you simply replace the electron charge e squared by e squared divided by 4 pi epsilon0. OK. So I've discussed the hydrogen atom. It's also insightful and you should actually remember that or be able to re-derive it for yourself. How do things depend on the nuclear charge z? Well, if you have a nuclear charge z, the Coulomb energy goes up by-- well, if you have a stronger attraction. If you would go to helium nucleus or even more highly-charged nucleus and put one electron in it. Because of the stronger Coulomb attraction, all the length scales are divided by z. So everything is smaller by a factor of z. So what does that now imply for the energy? Well, you have a Coulomb field which is z times stronger, but you probe it now at a z times smaller radius. So therefore, the energies scale with z squared. Let me formulate a question because we need that later on. So if you have a hydrogen-like atom and the electron is in a state with principal quantum number n. And let's assume there is no angular momentum. So what I'm writing down for you is the probability for the electron to be at the nucleus. This will be very important later on when ewe discuss hyperfine structure because hyperfine is responsible-- for hyperfine structure, what is responsible is the fact that the electron can overlap with the nucleus. 
So this factor will appear in our discussion of hyperfine structure. And what I want to ask you is, how does this quantity depend on the principal quantum number n and on z? And I want to give you four choices. Of course, for dimensional reasons, everything is 1 over the Bohr radius cubed because it's a density. But you cannot use dimensional analysis to guess, how do things scale with z and with n? So here are your four choices. Does it scale with z, z squared, z cubed? Does it scale with n squared, n cubed, n to the 6? If you don't know it, just make your best guess. OK, one part should be relatively straightforward. And this is the scaling with z. Let me just stop it. So the exact answer is that psi n 0 0 at the origin squared is z cubed over n cubed, divided by pi a0 cubed. So the correct answer is this one. Let me first say-- OK, I gave you four choices and it's difficult to distinguish all of them. But the first one you should have gotten rather simply, and this is the z-scaling. Because the scaling with z is the following, that everything-- if you write down the Schrodinger equation, if you have z, you replace e squared by z e squared. And I actually just mentioned it five minutes ago, that all length scales-- the Bohr radius is h bar squared over electron mass times e squared-- scale with 1/z. So if all length scales go as 1/z, the density goes with z cubed. So therefore, one should have immediately narrowed down the choice. It should be A or C because they have the correct scaling with z. The scaling with n is more subtle and there was something surprising I learned about it. And this is what I want to present to you in the last three or four minutes. So for the z-scaling, just remember that the length scaling is-- the length scales as 1/z. Therefore, the density scales as z cubed. The interesting thing about the length scaling is-- and I just want to draw your attention to it because it can be confusing-- that in hydrogen we have not only one length scale, but two length scales. We have mentioned one of them already, which is the energetic length scale 1/r. 1/r is the Coulomb energy. Because of the Virial theorem, it's proportional to the total energy. And that's what you know, what you remember when you wake up in the middle of the night out of deep sleep, that the energy of hydrogen is 1 over n squared. So therefore, this length scale is a0 times n squared. However, if you look at the wave function of hydrogen, you factor out-- when you solve the radial equation, you factor out an exponential. There's sort of a polynomial and then there is an exponential decay. And the characteristic length in the exponential decay of the wave function is n a0 over z. So therefore, when we talk about wave functions with principal quantum number n, there are two length scales. The energetic length scale, 1 over the expectation value of 1/r in the state n l, scales with n squared. But the characteristic length scale in the exponential part of the radial wave function scales with n and not with n squared. And it is this exponential part of the wave function which scales with n which is responsible for the probability to find the electron at the nucleus. Which, as I said before, the z-scaling is simple but the n-scaling is not n to the 6. It's n cubed. And this is really important. And this describes the scaling with n for everything which depends on the presence of the electron at the nucleus. One is the quantum defect and the other one is the hyperfine structure. Let me just give you one more scaling.
I've discussed now what happens for 0 angular momentum. For finite angular momentum states, psi is proportional to r to the l. So therefore, if you ask, what is psi squared, it scales as r to the 2l. And at least for large n, the n-scaling is, again, 1 over n cubed. OK, that's what I wanted to present to you today. Any questions? OK, so we meet again on Wednesday next week.
Lecture_Collection_Convolutional_Neural_Networks_for_Visual_Recognition_Spring_2017
Lecture_6_Training_Neural_Networks_I.txt
- Okay, let's get started. Okay, so today we're going to get into some of the details about how we train neural networks. So, some administrative details first. Assignment 1 is due today, Thursday, so 11:59 p.m. tonight on Canvas. We're also going to be releasing Assignment 2 today, and then your project proposals are due Tuesday, April 25th. So you should be really starting to think about your projects now if you haven't already. How many people have decided what they want to do for their project so far? Okay, so some, some people, so yeah, everyone else, you can go to TA office hours if you want suggestions and bounce ideas off of TAs. We also have a list of projects that other people have proposed. Some people usually affiliated with Stanford, so on Piazza, so you can take a look at those for additional ideas. And we also have some notes on backprop for a linear layer and a vector and tensor derivatives that Justin's written up, so that should help with understanding how exactly backprop works and for vectors and matrices. So these are linked to lecture four on the syllabus and you can go and take a look at those. Okay, so where we are now. We've talked about how to express a function in terms of a computational graph, that we can represent any function in terms of a computational graph. And we've talked more explicitly about neural networks, which is a type of graph where we have these linear layers that we stack on top of each other with nonlinearities in between. And we've also talked last lecture about convolutional neural networks, which are a particular type of network that uses convolutional layers to preserve the spatial structure throughout all the the hierarchy of the network. And so we saw exactly how a convolution layer looked, where each activation map in the convolutional layer output is produced by sliding a filter of weights over all of the spatial locations in the input. And we also saw that usually we can have many filters per layer, each of which produces a separate activation map. And so what we can get is from an input right, with a certain depth, we'll get an activation map output, which has some spatial dimension that's preserved, as well as the depth is the total number of filters that we have in that layer. And so what we want to do is we want to learn the values of all of these weights or parameters, and we saw that we can learn our network parameters through optimization, which we talked about little bit earlier in the course, right? And so we want to get to a point in the loss landscape that produces a low loss, and we can do this by taking steps in the direction of the negative gradient. And so the whole process we actually call a Mini-batch Stochastic Gradient Descent where the steps are that we continuously, we sample a batch of data. We forward prop it through our computational graph or our neural network. We get the loss at the end. We backprop through our network to calculate the gradients. And then we update the parameters or the weights in our network using this gradient. Okay, so now for the next couple of lectures we're going to talk about some of the details involved in training neural networks. And so this involves things like how do we set up our neural network at the beginning, which activation functions that we choose, how do we preprocess the data, weight initialization, regularization, gradient checking. We'll also talk about training dynamics. So, how do we babysit the learning process? 
How do we choose how we do parameter updates, specific perimeter update rules, and how do we do hyperparameter optimization to choose the best hyperparameters? And then we'll also talk about evaluation and model ensembles. So today in the first part, I will talk about activation functions, data preprocessing, weight initialization, batch normalization, babysitting the learning process, and hyperparameter optimization. Okay, so first activation functions. So, we saw earlier how out of any particular layer, we have the data coming in. We multiply by our weight in you know, fully connected or a convolutional layer. And then we'll pass this through an activation function or nonlinearity. And we saw some examples of this. We used sigmoid previously in some of our examples. We also saw the ReLU nonlinearity. And so today we'll talk more about different choices for these different nonlinearities and trade-offs between them. So first, the sigmoid, which we've seen before, and probably the one we're most comfortable with, right? So the sigmoid function is as we have up here, one over one plus e to the negative x. And what this does is it takes each number that's input into the sigmoid nonlinearity, so each element, and the elementwise squashes these into this range [0,1] right, using this function here. And so, if you get very high values as input, then output is going to be something near one. If you get very low values, or, I'm sorry, very negative values, it's going to be near zero. And then we have this regime near zero that it's in a linear regime. It looks a bit like a linear function. And so this is been historically popular, because sigmoids, in a sense, you can interpret them as a kind of a saturating firing rate of a neuron, right? So if it's something between zero and one, you could think of it as a firing rate. And we'll talk later about other nonlinearities, like ReLUs that, in practice, actually turned out to be more biologically plausible, but this does have a kind of interpretation that you could make. So if we look at this nonlinearity more carefully, there's several problems that there actually are with this. So the first is that saturated neurons can kill off the gradient. And so what exactly does this mean? So if we look at a sigmoid gate right, a node in our computational graph, and we have our data X as input into it, and then we have the output of the sigmoid gate coming out of it, what does the gradient flow look like as we're coming back? We have dL over d sigma right? The upstream gradient coming down, and then we're going to multiply this by dSigma over dX. This will be the gradient of a local sigmoid function. And we're going to chain these together for our downstream gradient that we pass back. So who can tell me what happens when X is equal to -10? It's very negative. What does is gradient look like? Zero, yeah, so that's right. So the gradient become zero and that's because in this negative, very negative region of the sigmoid, it's essentially flat, so the gradient is zero, and we chain any upstream gradient coming down. We multiply by basically something near zero, and we're going to get a very small gradient that's flowing back downwards, right? So, in a sense, after the chain rule, this kills the gradient flow and you're going to have a zero gradient passed down to downstream nodes. And so what happens when X is equal to zero? So there it's, yeah, it's fine in this regime. 
So, in this regime near zero, you're going to get a reasonable gradient here, and then it'll be fine for backprop. And then what about X equals 10? Zero, right. So again, so when X is equal to a very negative or X is equal to large positive numbers, then these are all regions where the sigmoid function is flat, and it's going to kill off the gradient and you're not going to get a gradient flow coming back. Okay, so a second problem is that the sigmoid outputs are not zero centered. And so let's take a look at why this is a problem. So, consider what happens when the input to a neuron is always positive. So in this case, all of our Xs we're going to say is positive. It's going to be multiplied by some weight, W, and then we're going to run it through our activation function. So what can we say about the gradients on W? So think about what the local gradient is going to be, right, for this linear layer. We have DL over whatever the activation function, the loss coming down, and then we have our local gradient, which is going to be basically X, right? And so what does this mean, if all of X is positive? Okay, so I heard it's always going to be positive. So that's almost right. It's always going to be either positive, or all positive or all negative, right? So, our upstream gradient coming down is DL over our loss. L is going to be DL over DF. and this is going to be either positive or negative. It's some arbitrary gradient coming down. And then our local gradient that we multiply this by is, if we're going to find the gradients on W, is going to be DF over DW, which is going to be X. And if X is always positive then the gradients on W, which is multiplying these two together, are going to always be the sign of the upstream gradient coming down. And so what this means is that all the gradients of W, since they're always either positive or negative, they're always going to move in the same direction. You're either going to increase all of the, when you do a parameter update, you're going to either increase all of the values of W by a positive amount, or differing positive amounts, or you will decrease them all. And so the problem with this is that, this gives very inefficient gradient updates. So, if you look at on the right here, we have an example of a case where, let's say W is two-dimensional, so we have our two axes for W, and if we say that we can only have all positive or all negative updates, then we have these two quadrants, and, are the two places where the axis are either all positive or negative, and these are the only directions in which we're allowed to make a gradient update. And so in the case where, let's say our hypothetical optimal W is actually this blue vector here, right, and we're starting off at you know some point, or at the top of the the the beginning of the red arrows, we can't just directly take a gradient update in this direction, because this is not in one of those two allowed gradient directions. And so what we're going to have to do, is we'll have to take a sequence of gradient updates. For example, in these red arrow directions that are each in allowed directions, in order to finally get to this optimal W. And so this is why also, in general, we want a zero mean data. So, we want our input X to be zero meaned, so that we actually have positive and negative values and we don't get into this problem of the gradient updates. They'll be all moving in the same direction. So is this clear? Any questions on this point? Okay. 
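To make the saturation concrete, here is a tiny numpy sketch of the sigmoid and its local gradient. This is just an illustration in my own notation, not code from the assignments.

```python
import numpy as np

def sigmoid(x):
    """Elementwise sigmoid: squashes inputs into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_backward(x, upstream_grad):
    """Local gradient of the sigmoid, chained with the upstream gradient."""
    s = sigmoid(x)
    return upstream_grad * s * (1.0 - s)   # dsigma/dx = sigma * (1 - sigma)

x = np.array([-10.0, 0.0, 10.0])
print(sigmoid(x))                       # ~[0.00005, 0.5, 0.99995]
print(sigmoid_backward(x, np.ones(3)))  # ~[0.00005, 0.25, 0.00005]: saturated ends kill the gradient
```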
Okay, so we've talked about these two main problems of the sigmoid. The saturated neurons can kill the gradients if we're too positive or too negative of an input. They're also not zero-centered and so we get these, this inefficient kind of gradient update. And then a third problem, we have an exponential function in here, so this is a little bit computationally expensive. In the grand scheme of your network, this is usually not the main problem, because we have all these convolutions and dot products that are a lot more expensive, but this is just a minor point also to observe. So now we can look at a second activation function here, tanh. And so this looks very similar to the sigmoid, but the difference is that now it's squashing to the range [-1, 1]. So here, the main difference is that it's now zero-centered, so we've gotten rid of the second problem that we had. It still kills the gradients, however, when it's saturated. So, you still have these regimes where the gradient is essentially flat and you're going to kill the gradient flow. So this is a bit better than the sigmoid, but it still has some problems. Okay, so now let's look at the ReLU activation function. And this is one that we saw in our examples last lecture when we were talking about the convolutional neural network. And we saw that we interspersed ReLU nonlinearities between many of the convolutional layers. And so, this function is f of x equals max of zero and x. So it takes an elementwise operation on your input and basically if your input is negative, it's going to put it to zero. And then if it's positive, it's going to be just passed through. It's the identity. And so this is one that's pretty commonly used, and if we look at this one and think about the problems that we saw earlier with the sigmoid and the tanh, we can see that it doesn't saturate in the positive region. So there's a whole half of our input space where it's not going to saturate, so this is a big advantage. So this is also computationally very efficient. We saw earlier that the sigmoid has this exponential in it. And so the ReLU is just this simple max, and it's extremely fast. And in practice, using this ReLU, it converges much faster than the sigmoid and the tanh, so about six times faster. And it's also turned out to be more biologically plausible than the sigmoid. So if you look at a neuron and you look at what the inputs look like, and you look at what the outputs look like, and you try to measure this in neuroscience experiments, you'll see that this one is actually a closer approximation to what's happening than sigmoids. And so ReLUs were starting to be used a lot around 2012 when we had AlexNet, the first major convolutional neural network that was able to do well on ImageNet and large-scale data. They used the ReLU in their experiments. So a problem however, with the ReLU, is that it's still not zero-centered. So we saw that the sigmoid was not zero-centered. Tanh fixed this and now ReLU has this problem again. And so that's one of the issues of the ReLU. And then we also have this further annoyance of, again we saw that in the positive half of the inputs, we don't have saturation, but this is not the case for the negative half. Right, so just thinking about this a little bit more precisely. So what's happening here when X equals negative 10? So zero gradient, that's right. What happens when X is equal to positive 10? It's good, right. So, we're in the linear regime.
And then what happens when X is equal to zero? Yes, it undefined here, but in practice, we'll say, you know, zero, right. And so basically, it's killing the gradient in half of the regime. And so we can get this phenomenon of basically dead ReLUs, when we're in this bad part of the regime. And so there's, you can look at this in, as coming from several potential reasons. And so if we look at our data cloud here, this is all of our training data, then if we look at where the ReLUs can fall, so the ReLUs can be, each of these is basically the half of the plane where it's going to activate. And so each of these is the plane that defines each of these ReLUs, and we can see that you can have these dead ReLUs that are basically off of the data cloud. And in this case, it will never activate and never update, as compared to an active ReLU where some of the data is going to be positive and passed through and some won't be. And so there's several reasons for this. The first is that it can happen when you have bad initialization. So if you have weights that happen to be unlucky and they happen to be off the data cloud, so they happen to specify this bad ReLU over here. Then they're never going to get a data input that causes it to activate, and so they're never going to get good gradient flow coming back. And so it'll just never update and never activate. What's the more common case is when your learning rate is too high. And so this case you started off with an okay ReLU, but because you're making these huge updates, the weights jump around and then your ReLU unit in a sense, gets knocked off of the data manifold. And so this happens through training. So it was fine at the beginning and then at some point, it became bad and it died. And so if in practice, if you freeze a network that you've trained and you pass the data through, you can see it actually is much as 10 to 20% of the network is these dead ReLUs. And so you know that's a problem, but also most networks do have this type of problem when you use ReLUs. Some of them will be dead, and in practice, people look into this, and it's a research problem, but it's still doing okay for training networks. Yeah, is there a question? [student speaking off mic] Right. So the question is, yeah, so the data cloud is just your training data. [student speaking off mic] Okay, so the question is when, how do you tell when the ReLU is going to be dead or not, with respect to the data cloud? And so if you look at, this is an example of like a simple two-dimensional case. And so our ReLU, we're going to get our input to the ReLU, which is going to be a basically you know, W1 X1 plus W2 X2, and it we apply this, so that that defines this this separating hyperplane here, and then we're going to take half of it that's going to be positive, and half of it's going to be killed off, and so yes, so you, you know you just, it's whatever the weights happened to be, and where the data happens to be is where these, where these hyperplanes fall, and so, so yeah so just throughout the course of training, some of your ReLUs will be in different places, with respect to the data cloud. Oh, question. [student speaking off mic] Yeah. So okay, so the question is for the sigmoid we talked about two drawbacks, and one of them was that the neurons can get saturated, so let's go back to the sigmoid here, and the question was this is not the case, when all of your inputs are positive. 
So when all of your inputs are positive, they're all going to be coming in in this zero plus region here, and so you can still get a saturating neuron, because you see up in this positive region, it also plateaus at one, and so when it's when you have large positive values as input you're also going to get the zero gradient, because you have you have a flat slope here. Okay. Okay, so in practice people also like to initialize ReLUs with slightly positive biases, in order to increase the likelihood of it being active at initialization and to get some updates. Right and so this basically just biases towards more ReLUs firing at the beginning, and in practice some say that it helps. Some say that it doesn't. Generally people don't always use this. It's yeah, a lot of times people just initialize it with zero biases still. Okay, so now we can look at some modifications on the ReLU that have come out since then, and so one example is this leaky ReLU. And so this looks very similar to the original ReLU, and the only difference is that now instead of being flat in the negative regime, we're going to give a slight negative slope here And so this solves a lot of the problems that we mentioned earlier. Right here we don't have any saturating regime, even in the negative space. It's still very computationally efficient. It still converges faster than sigmoid and tanh, very similar to a ReLU. And it doesn't have this dying problem. And there's also another example is the parametric rectifier, so PReLU. And so in this case it's just like a leaky ReLU where we again have this sloped region in the negative space, but now this slope in the negative regime is determined through this alpha parameter, so we don't specify, we don't hard-code it. but we treat it as now a parameter that we can backprop into and learn. And so this gives it a little bit more flexibility. And we also have something called an Exponential Linear Unit, an ELU, so we have all these different LUs, basically. and this one again, you know, it has all the benefits of the ReLu, but now you're, it is also closer to zero mean outputs. So, that's actually an advantage that the leaky ReLU, parametric ReLU, a lot of these they allow you to have your mean closer to zero, but compared with the leaky ReLU, instead of it being sloped in the negative regime, here you actually are building back in a negative saturation regime, and there's arguments that basically this allows you to have some more robustness to noise, and you basically get these deactivation states that can be more robust. And you can look at this paper for, there's a lot of kind of more justification for why this is the case. And in a sense this is kind of something in between the ReLUs and the leaky ReLUs, where has some of this shape, which the Leaky ReLU does, which gives it closer to zero mean output, but then it also still has some of this more saturating behavior that ReLUs have. A question? [student speaking off mic] So, whether this parameter alpha is going to be specific for each neuron. So, I believe it is often specified, but I actually can't remember exactly, so you can look in the paper for exactly, yeah, how this is defined, but yeah, so I believe this function is basically very carefully designed in order to have nice desirable properties. Okay, so there's basically all of these kinds of variants on the ReLU. And so you can see that, all of these it's kind of, you can argue that each one may have certain benefits, certain drawbacks in practice. 
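As a minimal sketch of the forward passes for the ReLU variants just discussed (the alpha values are just commonly used defaults, not anything specific to this lecture):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)        # small fixed negative slope, so no dead units

def prelu(x, alpha):
    # Same shape as leaky ReLU, but alpha is a learned parameter we backprop into.
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # Negative side saturates smoothly toward -alpha instead of staying linear.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

x = np.array([-3.0, -0.5, 0.0, 2.0])
print(leaky_relu(x))
print(prelu(x, alpha=0.25))
print(elu(x))
```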
People just want to run experiments all of them, and see empirically what works better, try and justify it, and come up with new ones, but they're all different things that are being experimented with. And so let's just mention one more. This is Maxout Neuron. So, this one looks a little bit different in that it doesn't have the same form as the others did of taking your basic dot product, and then putting this element-wise nonlinearity in front of it. Instead, it looks like this, this max of W dot product of X plus B, and a second set of weights, W2 dot product with X plus B2. And so what does this, is this is taking the max of these two functions in a sense. And so what it does is it generalizes the ReLU and the leaky ReLu, because you're just you're taking the max over these two, two linear functions. And so what this give us, it's again you're operating in a linear regime. It doesn't saturate and it doesn't die. The problem is that here, you are doubling the number of parameters per neuron. So, each neuron now has this original set of weights, W, but it now has W1 and W2, so you have twice these. So in practice, when we look at all of these activation functions, kind of a good general rule of thumb is use ReLU. This is the most standard one that generally just works well. And you know you do want to be careful in general with your learning rates to adjust them based, see how things do. We'll talk more about adjusting learning rates later in this lecture, but you can also try out some of these fancier activation functions, the leaky ReLU, Maxout, ELU, but these are generally, they're still kind of more experimental. So, you can see how they work for your problem. You can also try out tanh, but probably some of these ReLU and ReLU variants are going to be better. And in general don't use sigmoid. This is one of the earliest original activation functions, and ReLU and these other variants have generally worked better since then. Okay, so now let's talk a little bit about data preprocessing. Right, so the activation function, we design this is part of our network. Now we want to train the network, and we have our input data that we want to start training from. So, generally we want to always preprocess the data, and this is something that you've probably seen before in machine learning classes if you taken those. And some standard types of preprocessing are, you take your original data and you want to zero mean them, and then you probably want to also normalize that, so normalized by the standard deviation, And so why do we want to do this? For zero centering, you can remember earlier that we talked about when all the inputs are positive, for example, then we get all of our gradients on the weights to be positive, and we get this basically suboptimal optimization. And in general even if it's not all zero or all negative, any sort of bias will still cause this type of problem. And so then in terms of normalizing the data, this is basically you want to normalize data typically in the machine learning problems, so that all features are in the same range, and so that they contribute equally. 
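A small sketch of the standard preprocessing step just mentioned, zero-centering the data and normalizing each feature by its standard deviation (the toy data here is made up):

```python
import numpy as np

# Toy "training data": 1000 examples, 3 features with different means and scales.
X = np.random.randn(1000, 3) * np.array([1.0, 10.0, 0.1]) + np.array([5.0, -3.0, 0.5])

X -= np.mean(X, axis=0)   # zero-center each feature
X /= np.std(X, axis=0)    # scale each feature to unit standard deviation

print(X.mean(axis=0))     # ~0 for every feature
print(X.std(axis=0))      # ~1 for every feature
```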
In practice, since for images, which is what we're dealing with in this course here for the most part, we do do the zero centering, but in practice we don't actually normalize the pixel value so much, because generally for images right at each location you already have relatively comparable scale and distribution, and so we don't really need to normalize so much, compared to more general machine learning problems, where you might have different features that are very different and of very different scales. And in machine learning, you might also see a more complicated things, like PCA or whitening, but again with images, we typically just stick with the zero mean, and we don't do the normalization, and we also don't do some of these more complicated pre-processing. And one reason for this is generally with images we don't really want to take all of our input, let's say pixel values and project this onto a lower dimensional space of new kinds of features that we're dealing with. We typically just want to apply convolutional networks spatially and have our spatial structure over the original image. Yeah, question. [student speaking off mic] So the question is we do this pre-processing in a training phase, do we also do the same kind of thing in the test phase, and the answer is yes. So, let me just move to the next slide here. So, in general on the training phase is where we determine our let's say, mean, and then we apply this exact same mean to the test data. So, we'll normalize by the same empirical mean from the training data. Okay, so to summarize basically for images, we typically just do the zero mean pre-processing and we can subtract either the entire mean image. So, from the training data, you compute the mean image, which will be the same size as your, as each image. So, for example 32 by 32 by three, you'll get this array of numbers, and then you subtract that from each image that you're about to pass through the network, and you'll do the same thing at test time for this array that you determined at training time. In practice, we can also for some networks, we also do this by just of subtracting a per-channel mean, and so instead of having an entire mean image that were going to zero-center by, we just take the mean by channel, and this is just because it turns out that it was similar enough across the whole image, it didn't make such a big difference to subtract the mean image versus just a per-channel value. And this is easier to just pass around and deal with. So, you'll see this as well for example, in a VGG Network, which is a network that came after AlexNet, and we'll talk about that later. Question. [student speaking off mic] Okay, so there are two questions. The first is what's a channel, in this case, when we are subtracting a per-channel mean? And this is RGB, so our array, our images are typically for example, 32 by 32 by three. So, width, height, each are 32, and our depth, we have three channels RGB, and so we'll have one mean for the red channel, one mean for a green, one for blue. And then the second, what was your second question? [student speaking off mic] Oh. Okay, so the question is when we're subtracting the mean image, what is the mean taken over? And the mean is taking over all of your training images. So, you'll take all of your training images and just compute the mean of all of those. Does that make sense? [student speaking off mic] Yeah the question is, we do this for the entire training set, once before we start training. 
We don't do this per batch, and yeah, that's exactly correct. So we just want to have a good sample, an empirical mean that we have. And so if you take it per batch, if you're sampling reasonable batches, it should be basically, you should be getting the same values anyways for the mean, and so it's more efficient and easier just do this once at the beginning. You might not even have to really take it over the entire training data. You could also just sample enough training images to get a good estimate of your mean. Okay, so any other questions about data preprocessing? Yes. [student speaking off mic] So, the question is does the data preprocessing solve the sigmoid problem? So the data preprocessing is doing zero mean right? And we talked about how sigmoid, we want to have zero mean. And so it does solve this for the first layer that we pass it through. So, now our inputs to the first layer of our network is going to be zero mean, but we'll see later on that we're actually going to have this problem come up in much worse and greater form, as we have deep networks. You're going to get a lot of nonzero mean problems later on. And so in this case, this is not going to be sufficient. So this only helps at the first layer of your network. Okay, so now let's talk about how do we want to initialize the weights of our network? So, we have let's say our standard two layer neural network and we have all of these weights that we want to learn, but we have to start them with some value, right? And then we're going to update them using our gradient updates from there. So first question. What happens when we use an initialization of W equals zero? We just set all of the parameters to be zero. What's the problem with this? [student speaking off mic] So sorry, say that again. So I heard all the neurons are going to be dead. No updates ever. So not exactly. So, part of that is correct in that all the neurons will do the same thing. So, they might not all be dead. Depending on your input value, I mean, you could be in any regime of your neurons, so they might not be dead, but the key thing is that they will all do the same thing. So, since your weights are zero, given an input, every neuron is going to be, have the same operation basically on top of your inputs. And so, since they're all going to output the same thing, they're also all going to get the same gradient. And so, because of that, they're all going to update in the same way. And now you're just going to get all neurons that are exactly the same, which is not what you want. You want the neurons to learn different things. And so, that's the problem when you initialize everything equally and there's basically no symmetry breaking here. So, what's the first, yeah question? [student speaking off mic] So the question is, because that, because the gradient also depends on our loss, won't one backprop differently compared to the other? So in the last layer, like yes, you do have basically some of this, the gradients will get the same, sorry, will get different loss for each specific neuron based on which class it was connected to, but if you look at all the neurons generally throughout your network, like you're going to, you basically have a lot of these neurons that are connected in exactly the same way. They had the same updates and it's basically going to be the problem. Okay, so the first idea that we can have to try and improve upon this is to set all of the weights to be small random numbers that we can sample from a distribution. 
So, in this case, we're going to sample from basically a standard gaussian, but we're going to scale it so that the standard deviation is actually one E negative two, 0.01. And so, just give this many small random weights. And so, this does work okay for small networks, now we've broken the symmetry, but there's going to be problems with deeper networks. And so, let's take a look at why this is the case. So, here this is basically an experiment that we can do where let's take a deeper network. So in this case, let's initialize a 10 layer neural network to have 500 neurons in each of these 10 layers. Okay, we'll use tanh nonlinearities in this case and we'll initialize it with small random numbers as we described in the last slide. So here, we're going to basically just initialize this network. We have random data that we're going to take, and now let's just pass it through the entire network, and at each layer, look at the statistics of the activations that come out of that layer. And so, what we'll see this is probably a little bit hard to read up top, but if we compute the mean and the standard deviations at each layer, well see that at the first layer this is, the means are always around zero. There's a funny sound in here. Interesting, okay well that was fixed. So, if we look at, if we look at the outputs from here, the mean is always going to be around zero, which makes sense. So, if we look here, let's see, if we take this, we looked at the dot product of X with W, and then we took the tanh on linearity, and then we store these values and so, because it tanh is centered around zero, this will make sense, and then the standard deviation however shrinks, and it quickly collapses to zero. So, if we're plotting this, here this second row of plots here is showing the mean and standard deviations over time per layer and then in the bottom, the sequence of plots is showing for each of our layers. What's the distribution of the activations that we have? And so, we can see that at the first layer, we still have a reasonable gaussian looking thing. It's a nice distribution. But the problem is that as we multiply by this W, these small numbers at each layer, this quickly shrinks and collapses all of these values, as we multiply this over and over again. And so, by the end, we get all of these zeros, which is not what we want. So we get all the activations become zero. And so now let's think about the backwards pass. So, if we do a backward pass, now assuming this was our forward pass and now we want to compute our gradients. So first, what does the gradients look like on the weights? Does anyone have a guess? So, if we think about this, we have our input values are very small at each layer right, because they've all collapsed at this near zero, and then now each layer, we have our upstream gradient flowing down, and then in order to get the gradient on the weights remember it's our upstream gradient times our local gradient, which for this this dot product were doing W times X. It's just basically going to be X, which is our inputs. So, it's again a similar kind of problem that we saw earlier, where now since, so here because X is small, our weights are getting a very small gradient, and they're basically not updating. So, this is a way that you can basically try and think about the effect of gradient flows through your networks. 
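Here is a minimal reconstruction of the experiment just described: a 10-layer, 500-unit tanh network initialized with small random weights, printing activation statistics at each layer. Details like the random input data are made up, but the setup follows the description above:

```python
import numpy as np

np.random.seed(0)
num_layers, H = 10, 500
h = np.random.randn(1000, H)          # random input "data"

for layer in range(num_layers):
    W = 0.01 * np.random.randn(H, H)  # small random initialization
    h = np.tanh(h.dot(W))
    print(f"layer {layer}: mean {h.mean():+.5f}, std {h.std():.6f}")
# The standard deviation collapses toward zero layer by layer, so the activations
# (and therefore the gradients on the weights, which scale with these inputs) vanish.
```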
You can always think about what the forward pass is doing, and then think about what's happening as you have gradient flows coming down, and different types of inputs, what the effect of this actually is on our weights and the gradients on them. And so also, if now if we think about what's the gradient that's going to be flowing back from each layer as we're chaining all these gradients. Alright, so this is going to be the flip thing where we have now the gradient flowing back is our upstream gradient times in this case the local gradient is W on our input X. And so again, because this is the dot product, and so now, actually going backwards at each layer, we're basically doing a multiplication of the upstream gradient by our weights in order to get the next gradient flowing downwards. And so because here, we're multiplying by W over and over again. You're getting basically the same phenomenon as we had in the forward pass where everything is getting smaller and smaller. And now the gradient, upstream gradients are collapsing to zero as well. Question? [student speaking off mic] Yes, I guess upstream and downstream is, can be interpreted differently, depending on if you're going forward and backward, but in this case we're going, we're doing, we're going backwards, right? We're doing back propagation. And so upstream is the gradient flowing, you can think of a flow from your loss, all the way back to your input. And so upstream is what came from what you've already done, flowing you know, down into your current node. Right, so we're for flowing downwards, and what we get coming into the node through backprop is coming from upstream. Okay, so now let's think about what happens when, you know we saw that this was a problem when our weights were pretty small, right? So, we can think about well, what if we just try and solve this by making our weights big? So, let's sample from this standard gaussian, now with standard deviation one instead of 0.01. So what's the problem here? Does anyone have a guess? If our weights are now all big, and we're passing them, and we're taking these outputs of W times X, and passing them through tanh nonlinearities, remember we were talking about what happens at different values of inputs to tanh, so what's the problem? Okay, so yeah I heard that it's going to be saturated, so that's right. Basically now, because our weights are going to be big, we're going to always be at saturated regimes of either very negative or very positive of the tanh. And so in practice, what you're going to get here is now if we look at the distribution of the activations at each of the layers here on the bottom, they're going to be all basically negative one or plus one. Right, and so this will have the problem that we talked about with the tanh earlier, when they're saturated, that all the gradients will be zero, and our weights are not updating. So basically, it's really hard to get your weight initialization right. When it's too small they all collapse. When it's too large they saturate. So, there's been some work in trying to figure out well, what's the proper way to initialize these weights. And so, one kind of good rule of thumb that you can use is the Xavier initialization. And so this is from this paper by Glorot in 2010. And so what this formula is, is if we look at W up here, we can see that we want to initialize them to these, we sample from our standard gaussian, and then we're going to scale by the number of inputs that we have. 
And you can go through the math, and you can see in the lecture notes as well as in this paper exactly how this works out, but basically the way we do it is we specify that we want the variance of the input to be the same as the variance of the output, and then if you derive what the weights should be you'll get this formula, and intuitively what this kind of means is that if you have a small number of inputs, right, then we're going to divide by the smaller number and get larger weights, and we need larger weights, because with small inputs, where you're multiplying each of these by a weight, you need larger weights to get the same larger variance at the output, and kind of vice versa: if we have many inputs, then we want smaller weights in order to get the same spread at the output. So, you can look at the notes for more details about this. And so basically now, if we want to have a unit gaussian as input to each layer, we can use this kind of initialization at training time, so that there is approximately a unit gaussian at each layer. Okay, and so one thing this does assume, though, is that there are linear activations, and so it assumes that we are in the active region of the tanh, for example. And so again, you can look at the notes to really try and understand its derivation, but the problem is that this breaks when you now use something like a ReLU. Right, and so with the ReLU what happens is that, because it's killing half of your units, setting approximately half of them to zero each time, it's actually halving the variance that you get out of this. And so now, if you just make the same assumptions as in your derivation earlier, you won't actually get the right variance coming out; it's going to be too small. And so what you see is again this kind of phenomenon, where the distributions start collapsing. In this case you get more and more peaked toward zero, and more units deactivated. And the way to address this is something that has been pointed out in some papers, which is that you can try to account for this with an extra divide by two. So, now you're basically adjusting for the fact that half the neurons get killed. And so your kind of equivalent input actually has half this number of inputs, and so you just add this divide-by-two factor in, and this works much better, and you can see that the distributions are pretty good throughout all layers of the network. And so in practice this has actually been really important for training these types of networks; really paying attention to how your weights are initialized makes a big difference. And so for example, you'll see in some papers that this actually is the difference between the network even training at all and performing well, versus nothing happening. So, proper initialization is still an active area of research. And so if you're interested in this, you can look at a lot of these papers and resources. A good general rule of thumb is basically use the Xavier initialization to start with, and then you can also think about some of these other kinds of methods. And so now we're going to talk about a related idea to this, this idea of wanting to keep activations in a gaussian range that we want. Right, and so the idea behind what we're going to call batch normalization is, okay, we want unit gaussian activations. Let's just make them that way. Let's just force them to be that way. And so how does this work?
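Before getting into the details of batch normalization, here is a rough sketch of the two initialization recipes just discussed: Xavier scaling for tanh, and the extra factor of two for ReLU (He et al. 2015). The network sizes are arbitrary, and this simply repeats the earlier deep-network experiment with the better scaling:

```python
import numpy as np

np.random.seed(0)
num_layers, H = 10, 500
x = np.random.randn(1000, H)

# Xavier initialization (Glorot 2010): scale the weights by 1/sqrt(fan_in).
h = x
for _ in range(num_layers):
    W = np.random.randn(H, H) / np.sqrt(H)
    h = np.tanh(h.dot(W))
print("tanh + Xavier, final-layer std:", h.std())    # stays at a reasonable scale

# With ReLUs, half the units are zeroed, which halves the variance, so we
# compensate with an extra factor of 2 in the denominator (He et al. 2015).
h = x
for _ in range(num_layers):
    W = np.random.randn(H, H) / np.sqrt(H / 2.0)
    h = np.maximum(0, h.dot(W))
print("ReLU + He, final-layer std:", h.std())        # distributions stay spread out
```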
So, let's consider a batch of activations at some layer. And so now we have all of our activations coming out. If we want to make this unit gaussian, we actually can just do this empirically, right. We can take the mean and the variance of the current batch, and we can just normalize by this. Right, and so basically, instead of with weight initialization, where we're setting this at the start of training so that we try and get it into a good spot where we can have unit gaussians at every layer and hope that training preserves this, now we're going to explicitly make that happen on every forward pass through the network. We're going to make this happen functionally, and basically by normalizing by the mean and the variance of each neuron: we look at all of the inputs coming into it, calculate the mean and variance for that batch, and normalize by it. And the thing is that this is just a differentiable function, right? If we have our mean and our variance as constants, this is just a sequence of computational operations that we can differentiate and do backprop through. Okay, so just as I was saying earlier, right, if we look at our input data, and we think of this as we have N training examples in our current batch, and each example has dimension D, we're going to compute the empirical mean and variance independently for each dimension, so basically each feature element, and we compute this across our batch, our current mini-batch that we have, and we normalize by this. And so this is usually inserted after fully connected or convolutional layers. We saw that when we were multiplying by W in these layers, which we do over and over again, we can have this bad scaling effect with each one. And so this basically is able to undo this effect. Right, and since we're basically just scaling the inputs connected to each neuron, each activation, we can apply this the same way to fully connected and convolutional layers, and the only difference is that, with convolutional layers, we don't want to normalize just across all the training examples independently for each feature dimension, but we actually want to normalize jointly across all the spatial locations that we have in our activation map, as well as all of the training examples. And we do this because we want to obey the convolutional property, and we want nearby locations to be normalized the same way, right? And so with a convolutional layer, we're basically going to have one mean and one standard deviation per activation map that we have, and we're going to normalize by this across all of the examples in the batch. And so this is something that you guys are going to implement in your next homework. And so, all of these details are explained very clearly in this paper from 2015. And this is a very useful technique that you want to use a lot in practice. You want to have these batch normalization layers. And so you should read this paper. Go through all of the derivations, and then also go through the derivations of how to compute the gradients given this normalization operation.
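A minimal sketch of this forward pass for a fully connected layer; the homework version will need to be more careful and also cache intermediate values for the backward pass:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """x: (N, D) mini-batch; gamma, beta: (D,) learnable scale and shift."""
    mu = x.mean(axis=0)                       # per-feature empirical mean over the batch
    var = x.var(axis=0)                       # per-feature empirical variance
    x_hat = (x - mu) / np.sqrt(var + eps)     # normalize to ~zero mean, unit variance
    return gamma * x_hat + beta               # learnable scale and shift

x = 3.0 + 2.0 * np.random.randn(64, 100)      # a mini-batch that is far from zero mean
out = batchnorm_forward(x, gamma=np.ones(100), beta=np.zeros(100))
print(out.mean(), out.std())                  # ~0 and ~1
```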
Okay, so one thing that I just want to point out is that, it's not clear that, you know, we're doing this batch normalization after every fully connected layer, and it's not clear that we necessarily want a unit gaussian input to these tanh nonlinearities, because what this is doing is this is constraining you to the linear regime of this nonlinearity, and we're not actually, you're trying to basically say, let's not have any of this saturation, but maybe a little bit of this is good, right? You you want to be able to control what's, how much saturation that you want to have. And so what, the way that we address this when we're doing batch normalization is that we have our normalization operation, but then after that we have this additional squashing and scaling operation. So, we do our normalization. Then we're going to scale by some constant gamma, and then shift by another factor of beta. Right, and so what this actually does is that this allows you to be able to recover the identity function if you wanted to. So, if the network wanted to, it could learn your scaling factor gamma to be just your variance. It could learn your beta to be your mean, and in this case you can recover the identity mapping, as if you didn't have batch normalization. And so now you have the flexibility of doing kind of everything in between and making your the network learning how to make your tanh more or less saturated, and how much to do so in order to have, to have good training. Okay, so just to sort of summarize the batch normalization idea. Right, so given our inputs, we're going to compute our mini-batch mean. So, we do this for every mini-batch that's coming in. We compute our variance. We normalize by the mean and variance, and we have this additional scaling and shifting factor. And so this improves gradient flow through the network. it's also more robust as a result. It works for more range of learning rates, and different kinds of initialization, so people have seen that once you put batch normalization in, and it's just easier to train, and so that's why you should do this. And then also when one thing that I just want to point out is that you can also think of this as in a way also doing some regularization. Right and so, because now at the output of each layer, each of these activations, each of these outputs, is an output of both your input X, as well as the other examples in the batch that it happens to be sampled with, right, because you're going to normalize each input data by the empirical mean over that batch. So because of that, it's no longer producing deterministic values for a given training example, and it's tying all of these inputs in a batch together. And so this basically, because it's no longer deterministic, kind of jitters your representation of X a little bit, and in a sense, gives some sort of regularization effect. Yeah, question? [student speaking off camera] The question is gamma and beta are learned parameters, and yes that's the case. [student speaking off mic] Yeah, so the question is why do we want to learn this gamma and beta to be able to learn the identity function back, and the reason is because you want to give it the flexibility. Right, so what batch normalization is doing, is it's forcing our data to become this unit gaussian, our inputs to be unit gaussian, but even though in general this is a good idea, it's not always that this is exactly the best thing to do. 
And we saw in particular for something like a tanh, you might want to control some degree of saturation that you have. And so what this does is it gives you the flexibility of doing this exact like unit gaussian normalization, if it wants to, but also learning that maybe in this particular part of the network, maybe that's not the best thing to do. Maybe we want something still in this general idea, but slightly different right, slightly scaled or shifted. And so these parameters just give it that extra flexibility to learn that if it wants to. And then yeah, if the the best thing to do is just batch normalization then it'll learn the right parameters for that. Yeah? [student speaking off mic] Yeah, so basically each neuron output. So, we have output of a fully connected layer. We have W times X. and so we have the values of each of these outputs, and then we're going to apply batch normalization separately to each of these neurons. Question? [student speaking off mic] Yeah, so the question is that for things like reinforcement learning, you might have a really small batch size. How do you deal with this? So in practice, I guess batch normalization has been used a lot for like for standard convolutional neural networks, and there's actually papers on how do we want to do normalization for different kinds of recurrent networks, or you know some of these networks that might also be in reinforcement learning. And so there's different considerations that you might want to think of there. And this is still an active area of research. There's papers on this and we might also talk about some of this more later, but for a typical convolutional neural network this generally works fine. And then if you have a smaller batch size, maybe this becomes a little bit less accurate, but you still get kind of the same effect. And you know it's possible also that you could design your mean and variance to be computed maybe over more examples, right, and I think in practice usually it's just okay, so you don't see this too much, but this is something that maybe could help if that was a problem. Yeah, question? [student speaking off mic] So the question, so the question is, if we force the inputs to be gaussian, do we lose the structure? So, no in a sense that you can think of like, if you had all your features distributed as a gaussian for example, even if you were just doing data pre-processing, this gaussian is not losing you any structure. All the, it's just shifting and scaling your data into a regime, that works well for the operations that you're going to perform on it. In convolutional layers, you do have some structure, that you want to preserve spatially, right. You want, like if you look at your activation maps, you want them to relatively all make sense to each other. So, in this case you do want to take that into consideration. And so now, we're going to normalize, find one mean for the entire activation map, so we only find the empirical mean and variance over training examples. And so that's something that you'll be doing in your homework, and also explained in the paper as well. So, you should refer to that. Yes. [student speaking off mic] So the question is, are we normalizing the weight so that they become gaussian. So, if I understand your question correctly, then the answer is, we're normalizing the inputs to each layer, so we're not changing the weights in this process. 
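To make the convolutional case concrete, here is a sketch of spatial batch normalization with one mean and variance per channel, computed over the batch and all spatial locations. The N x H x W x C layout used here is just one common convention, not necessarily the one used in the homework:

```python
import numpy as np

def spatial_batchnorm_forward(x, gamma, beta, eps=1e-5):
    """x: (N, H, W, C) activation maps; gamma, beta: (C,) per-channel scale and shift."""
    mu = x.mean(axis=(0, 1, 2))                # one mean per channel, over batch and space
    var = x.var(axis=(0, 1, 2))                # one variance per channel
    x_hat = (x - mu) / np.sqrt(var + eps)      # nearby locations are normalized the same way
    return gamma * x_hat + beta

x = np.random.randn(8, 16, 16, 32) * 4.0 + 1.0
out = spatial_batchnorm_forward(x, gamma=np.ones(32), beta=np.zeros(32))
print(out.mean(axis=(0, 1, 2))[:3], out.std(axis=(0, 1, 2))[:3])   # ~0 and ~1 per channel
```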
[student speaking off mic] Yeah, so the question is, once we subtract by the mean and divide by the standard deviation, does this become gaussian, and the answer is yes. So, if you think about the operations that are happening, basically you're shifting by the mean, right. And so this shift up to be zero-centered, and then you're scaling by the standard deviation. This now transforms this into a unit gaussian. And so if you want to look more into that, I think you can look at, there's a lot of machine learning explanations that go into exactly what this, visualizing with this operation is doing, but yeah this basically takes your data and turns it into a gaussian distribution. Okay, so yeah question? [student speaking off mic] Uh-huh. So the question is, if we're going to be doing the shift and scale, and learning these is the batch normalization redundant, because you could recover the identity mapping? So in the case that the network learns that identity mapping is always the best, and it learns these parameters, the yeah, there would be no point for batch normalization, but in practice this doesn't happen. So in practice, we will learn this gamma and beta. That's not the same as a identity mapping. So, it will shift and scale by some amount, but not the amount that's going to give you an identity mapping. And so what you get is you still get this batch normalization effect. Right, so having this identity mapping there, I'm only putting this here to say that in the extreme, it could learn the identity mapping, but in practice it doesn't. Yeah, question. [student speaking off mic] Yeah. [student speaking off mic] Oh, right, right. Yeah, yeah sorry, I was not clear about this, but yeah I think this is related to the other question earlier, that yeah when we're doing this we're actually getting zero mean and unit gaussian, which put this into a nice shape, but it doesn't have to actually be a gaussian. So yeah, I mean ideally, if we're looking at like inputs coming in, as you know, sort of approximately gaussian, we would like it to have this kind of effect, but yeah, in practice it doesn't have to be. Okay, so ... Okay, so the last thing I just want to mention about this is that, so at test time, the batch normalization layer, we now take the empirical mean and variance from the training data. So, we don't re-compute this at test time. We just estimate this at training time, for example using running averages, and then we're going to use this as at test time. So, we're just going to scale by that. Okay, so now I'm going to move on to babysitting the learning process. Right, so now we've defined our network architecture, and we'll talk about how do we monitor training, and how do we adjust hyperparameters as we go, to get good learning results? So as always, so the first step we want to do, is we want to pre-process the data. Right, so we want to zero mean the data as we talked about earlier. Then we want to choose the architecture, and so here we are starting with one hidden layer of 50 neurons, for example, but we've basically we can pick any architecture that we want to start with. And then the first thing that we want to do is we initialize our network. We do a forward pass through it, and we want to make sure that our loss is reasonable. So, we talked about this several lectures ago, where we have a basically a, let's say we have a Softmax classifier that we have here. We know what our loss should be, when our weights are small, and we have generally a diffuse distribution. 
Then the Softmax classifier loss is going to be your negative log likelihood, which if we have 10 classes, it'll be something like negative log of one over 10, which here is around 2.3, and so we want to make sure that our loss is what we expect it to be. So, this is a good sanity check that we want to always, always do. So first we want to do this with zero regularization, right. So, when we disable the regularization, now our only loss term is this data loss, which is going to give 2.3 here. And so now, once we've seen that our original loss is good, we want to crank up the regularization, and when we do that, we want to see that our loss goes up, because we've added this additional regularization term. So, this is a good next step that you can do for your sanity check. And then, now we can start training. So, now we start trying to train. A good way to do this is to start with a very small amount of data, because if you have just a very small training set, you should be able to overfit this very well and get very good training loss on it. And so in this case we want to turn off our regularization again, and just see if we can make the loss go down to zero. And so we can see how our loss is changing, as we go over all these epochs. We compute our loss at each epoch, and we want to see this go all the way down to zero. Right, and here we can see that also our training accuracy is going all the way up to one, and this makes sense, right. If you have a very small amount of data, you should be able to overfit this perfectly. Okay, so now once you've done that, these are all sanity checks. Now you can start really trying to train. So, now you can take your full training data, and start with a small amount of regularization, and let's first figure out what's a good learning rate. So, learning rate is one of the most important hyperparameters, and it's something that you want to adjust first. So, you want to try some value of learning rate, and here I've tried one E negative six, and you can see that the loss is barely changing. Right, and so the reason this is barely changing is usually because your learning rate is too small. So when it's too small, your gradient updates are not big enough, and your cost stays basically about the same. Okay, so, one thing that I want to point out here is that we can notice that even though our loss was barely changing, the training and the validation accuracy jumped up to 20% very quickly. And so does anyone have any idea for why this might be the case? So remember we have a Softmax function, and our loss didn't really change, but our accuracy improved a lot. Okay, so the reason for this is that here the probabilities are still pretty diffuse, so our loss term is still pretty similar, but when we shift all of these probabilities slightly in the right direction, because we're learning, right? Our weights are changing in the right direction. Now the accuracy all of a sudden can jump, because we're taking the maximum correct value, and so we're going to get a big jump in accuracy, even though our probabilities are still relatively diffuse and the loss is still about the same. Okay, so now if we try another learning rate, now here I'm jumping to the other extreme, picking a very big learning rate, one E six. What's happening is that our cost is now giving us NaNs. And when you have NaNs, what this usually means is that basically your cost exploded. And so, the reason for that is typically that your learning rate was too high.
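As an aside, here is a small sketch of the initial-loss sanity check described above: with small weights and no regularization, a 10-way softmax classifier should start out near negative log of one tenth, about 2.3. The data shapes below are made up:

```python
import numpy as np

num_classes = 10
print(-np.log(1.0 / num_classes))     # ~2.302..., the expected initial loss

def softmax_loss(scores, y):
    scores = scores - scores.max(axis=1, keepdims=True)
    probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return -np.log(probs[np.arange(len(y)), y]).mean()

X = np.random.randn(100, 3072)                     # 100 random "images"
y = np.random.randint(0, num_classes, size=100)
W = 0.0001 * np.random.randn(3072, num_classes)    # small weights -> diffuse scores
print(softmax_loss(X.dot(W), y))                   # should come out close to 2.3
```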
So, then you can adjust your learning rate down again. Here I can see that we're trying three E to the negative three. The cost is still exploding. So, usually the rough range for learning rates that we want to look at is between one E negative three and one E negative five. And this is the rough range that we want to be cross-validating in between. So, you want to try out values in this range, and depending on whether your loss is going down too slowly or whether it's blowing up because the rate is too large, adjust it based on this. And so how exactly do we pick these hyperparameters, do hyperparameter optimization, and pick the best values of all of these hyperparameters? So, the strategy that we're going to use for any hyperparameter, for example the learning rate, is to do cross-validation. So, cross-validation is training on your training set, and then evaluating on a validation set: how well does this hyperparameter do? Something that you guys have already done in your assignment. And so typically we want to do this in stages. So, we can first do a coarse stage, where we pick values spread pretty far apart, and then we learn for only a few epochs. And with only a few epochs, you can already get a pretty good sense of which values are good or not, right. You can quickly see that it's a NaN, or you can see that nothing is happening, and you can adjust accordingly. So, typically once you do that, then you can see what's a pretty good range, the range that you now want to do finer sampling of values in. And so, this is the second stage, where now you might want to run this for a longer time, and do a finer search over that region. And one tip for detecting explosions like NaNs: you can, in your training loop, sample some hyperparameter, start training, and then look at your cost at every iteration or every epoch. And if you ever get a cost that's much larger than your original cost, for example something like three times the original cost, then you know that this is not heading in the right direction. Right, it's getting very big, very quickly, and you can just break out of your loop, stop with this hyperparameter choice, and pick something else. Alright, so an example of this: let's say here we want to run a coarse search for five epochs. This is a similar network to the one we were talking about earlier, and what we can do is look at all of these validation accuracies that we're getting. And I've highlighted in red the ones that give better values. And so these are going to be regions that we're going to look into in more detail. And one thing to note is that it's usually better to optimize in log space. And so here, instead of sampling uniformly between, say, 0.01 and 100, you're going to actually do 10 to the power of some range. Right, and this is because the learning rate is multiplying your gradient update. And so it has these multiplicative effects, and so it makes more sense to consider a range of learning rates that are multiplied or divided by some value, rather than uniformly sampled. So, you want to be dealing with orders of magnitude here. Okay, so once you find that, you can then adjust your range. Right, in this case, we have a range of, you know, maybe 10 to the negative four to 10 to the zero power. This is a good range that we want to narrow down into. And so we can do this again, and here we can see that we're getting a relatively good accuracy of 53%.
And so this means we're headed in the right direction. The one thing that I want to point out is that here we actually have a problem. And so the problem is that we can see that our best accuracy here has a learning rate that's about, you know, all of our good learning rates are in this E to the negative four range. Right, and since the learning rate that we specified was going from 10 to the negative four to 10 to the zero, that means that all the good learning rates, were at the edge of the range that we were sampling. And so this is bad, because this means that we might not have explored our space sufficiently, right. We might actually want to go to 10 to the negative five, or 10 to the negative six. There might be still better ranges if we continue shifting down. So, you want to make sure that your range kind of has the good values somewhere in the middle, or somewhere where you get a sense that you've hit, you've explored your range fully. Okay, and so another thing is that we can sample all of our different hyperparameters, using a kind of grid search, right. We can sample for a fixed set of combinations, a fixed set of values for each hyperparameter. Sample in a grid manner over all of these values, but in practice it's actually better to sample from a random layout, so sampling random value of each hyperparameter in a range. And so what you'll get instead is we'll have these two hyper parameters here that we want to sample from. You'll get samples that look like this right side instead. And the reason for this is that if a function is really sort of more of a function of one variable than another, which is usually true. Usually we have little bit more, a lower effective dimensionality than we actually have. Then you're going to get many more samples of the important variable that you have. You're going to be able to see this shape in this green function that I've drawn on top, showing where the good values are, compared to if you just did a grid layout where we were only able to sample three values here, and you've missed where were the good regions. Right, and so basically we'll get much more useful signal overall since we have more samples of different values of the important variable. And so, hyperparameters to play with, we've talked about learning rate, things like different types of decay schedules, update types, regularization, also your network architecture, so the number of hidden units, the depth, all of these are hyperparameters that you can optimize over. And we've talked about some of these, but we'll keep talking about more of these in the next lecture. And so you can think of this as kind of, you know, if you're basically tuning all the knobs right, of some turntable where you're, you're a neural networks practitioner. You can think of the music that's output is the loss function that you want, and you want to adjust everything appropriately to get the kind of output that you want. Alright, so it's really kind of an art that you're doing. And in practice, you're going to do a lot of hyperparameter optimization, a lot of cross validation. And so you know, in order to get numbers, people will run cross validation over tons of hyperparameters, monitor all of them, see which ones are doing better, which ones are doing worse. Here we have all these loss curves. Pick the right ones, readjust, and keep going through this process. 
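Putting these tips together, here is a hypothetical sketch of a coarse random search that samples the learning rate and regularization strength in log space and aborts a trial early if the cost explodes past three times the original cost. The function train_one_epoch is a made-up placeholder for your actual training code and is assumed to keep its own model state between calls:

```python
import numpy as np

def coarse_search(train_one_epoch, initial_cost, num_trials=20, num_epochs=5):
    results = []
    for _ in range(num_trials):
        lr = 10 ** np.random.uniform(-5, -3)    # sample the exponent, not the value itself
        reg = 10 ** np.random.uniform(-4, 0)
        val_acc = 0.0
        for _ in range(num_epochs):
            cost, val_acc = train_one_epoch(lr=lr, reg=reg)
            if cost > 3 * initial_cost:         # exploding: abandon this setting early
                break
        else:
            results.append((val_acc, lr, reg))  # only keep trials that did not explode
    return sorted(results, reverse=True)        # best validation accuracy first
```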
And so as I mentioned earlier, as you're monitoring each of these loss curves, learning rate is an important one, but you'll get a sense for how different learning rates, which learning rates are good and bad. So you'll see that if you have a very high exploding one, right, this is your loss explodes, then your learning rate is too high. If it's too kind of linear and too flat, you'll see that it's too low, it's not changing enough. And if you get something that looks like there's a steep change, but then a plateau, this is also an indicator of it being maybe too high, because in this case, you're taking too large jumps, and you're not able to settle well into your local optimum. And so a good learning rate usually ends up looking something like this, where you have a relatively steep curve, but then it's continuing to go down, and then you might keep adjusting your learning rate from there. And so this is something that you'll see through practice. Okay and just, I think we're very close to the end, so just one last thing that I want to point out is than in case you ever see learning rate loss curves, where it's ... So if you ever see loss curves where it's flat for a while, and then starts training all of a sudden, a potential reason could be bad initialization. So in this case, your gradients are not really flowing too well the beginning, so nothing's really learning, and then at some point, it just happens to adjust in the right way, such that it tips over and things just start training right? And so there's a lot of experience at looking at these and see what's wrong that you'll get over time. And so you'll usually want to monitor and visualize your accuracy. If you have a big gap between your training accuracy and your validation accuracy, it usually means that you might have overfitting and you might want to increase your regularization strength. If you have no gap, you might want to increase your model capacity, because you haven't overfit yet. You could potentially increase it more. And in general, we also want to track the updates, the ratio of our weight updates to our weight magnitudes. We can just take the norm of our parameters that we have to get a sense for how large they are, and when we have our update size, we can also take the norm of that, get a sense for how large that is, and we want this ratio to be somewhere around 0.001. There's a lot of variance in this range, so you don't have to be exactly on this, but it's just this sense of you don't want your updates to be too large compared to your value or too small, right? You don't want to dominate or to have no effect. And so this is just something that can help debug what might be a problem. Okay, so in summary, today we've looked at activation functions, data preprocessing, weight initialization, batch norm, babysitting the learning process, and hyperparameter optimization. These are the kind of the takeaways for each that you guys should keep in mind. Use ReLUs, subtract the mean, use Xavier Initialization, use batch norm, and sample hyperparameters randomly. And next time we'll continue to talk about the training neural networks with all these different topics. Thanks.
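As a small addendum to the update-ratio check mentioned above, here is a sketch with made-up numbers; the 0.001 figure is only a rough rule of thumb, not a hard target:

```python
import numpy as np

W = 0.01 * np.random.randn(500, 500)     # some parameter tensor
dW = np.random.randn(500, 500)           # its gradient from backprop (made up here)
learning_rate = 1e-5

param_scale = np.linalg.norm(W.ravel())
update_scale = np.linalg.norm((-learning_rate * dW).ravel())
print(update_scale / param_scale)        # ~0.001, roughly in line with the rule of thumb
```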
Lecture_Collection_Convolutional_Neural_Networks_for_Visual_Recognition_Spring_2017
Lecture_7_Training_Neural_Networks_II.txt
- Okay, it's after 12, so I think we should get started. Today we're going to kind of pick up where we left off last time. Last time we talked about a lot of sort of tips and tricks involved in the nitty gritty details of training neural networks. Today we'll pick up where we left off, and talk about a lot more of these sort of nitty gritty details about training these things. As usual, a couple administrative notes before we get into the material. As you all know, assignment one is already due. Hopefully you all turned it in. Did it go okay? Was it not okay? Rough sentiment? Mostly okay. Okay, that's good. Awesome. [laughs] We're in the process of grading those, so stay tuned. We're hoping to get grades back for those before A two is due. Another reminder that your project proposals are due tomorrow. Actually, no, today at 11:59. Make sure you send those in. Details are on the website and on Piazza. Also a reminder, assignment two is already out. That'll be due a week from Thursday. Historically, assignment two has been the longest one in the class, so if you haven't started already on assignment two, I'd recommend you take a look at that pretty soon. Another reminder is that for assignment two, I think a lot of you will be using Google Cloud. Big reminder, make sure to stop your instances when you're not using them because whenever your instance is on, you get charged, and we only have so many coupons to distribute to you guys. Anytime your instance is on, even if you're not SSH'd into it, even if you're not running things immediately in your Jupyter Notebook, any time that instance is on, you're going to be charged. Just make sure that you explicitly stop your instances when you're not using them. In this example, I've got a little screenshot of my dashboard on Google Cloud. I need to go in there and explicitly go to the dropdown and click stop. Just make sure that you do this when you're done working each day. Another thing to remember is it's kind of up to you guys to keep track of your spending on Google Cloud. In particular, instances that use GPUs are a lot more expensive than those with CPUs. Rough order of magnitude, those GPU instances are around 90 cents to a dollar an hour. Those are actually quite pricey. The CPU instances are much cheaper. The general strategy is that you probably want to make two instances, one with a GPU and one without, and then only use that GPU instance when you really need the GPU. For example, on assignment two, for most of the assignment, you should only need the CPU, so you should only use your CPU instance for that. But then the final question, about TensorFlow or PyTorch, that will need a GPU. This'll give you a little bit of practice with switching between multiple instances and only using that GPU when it's really necessary. Again, just kind of watch your spending. Try not to go too crazy on these things. Any questions on the administrative stuff before we move on? Question. - [Student] How much RAM should we use? - Question is how much RAM should we use? I think eight or 16 gigs is probably good for everything that you need in this class. As you scale up the number of CPUs and the amount of RAM, you also end up spending more money. If you stick with two or four CPUs and eight or 16 gigs of RAM, that should be plenty for all the homework-related stuff that you need to do. As a quick recap, last time we talked about activation functions. We talked about this whole zoo of different activation functions and some of their different properties.
We saw that the sigmoid, which used to be quite popular when training neural networks maybe 10 years ago or so, has this problem with vanishing gradients near the two ends of the activation function. tanh has this similar sort of problem. Kind of the general recommendation is that you probably want to stick with ReLU for most cases as sort of a default choice 'cause it tends to work well for a lot of different architectures. We also talked about weight initialization. Remember that up on the top, we have this idea that when you initialize your weights at the start of training, if those weights are initialized to be too small, then if you look at, then the activations will vanish as you go through the network because as you multiply by these small numbers over and over again, they'll all sort of decay to zero. Then everything will be zero, learning won't happen, you'll be sad. On the other hand, if you initialize your weights too big, then as you go through the network and multiply by your weight matrix over and over again, eventually they'll explode. You'll be unhappy, there'll be no learning, it will be very bad. But if you get that initialization just right, for example, using the Xavier initialization or the MSRA initialization, then you kind of keep a nice distribution of activations as you go through the network. Remember that this kind of gets more and more important and more and more critical as your networks get deeper and deeper because as your network gets deeper, you're multiplying by those weight matrices over and over again with these more multiplicative terms. We also talked last time about data preprocessing. We talked about how it's pretty typical in conv nets to zero center and normalize your data so it has zero mean and unit variance. I wanted to provide a little bit of extra intuition about why you might actually want to do this. Imagine a simple setup where we have a binary classification problem where we want to draw a line to separate these red points from these blue points. On the left, you have this idea where if those data points are kind of not normalized and not centered and far away from the origin, then we can still use a line to separate them, but now if that line wiggles just a little bit, then our classification is going to get totally destroyed. That kind of means that in the example on the left, the loss function is now extremely sensitive to small perturbations in that linear classifier in our weight matrix. We can still represent the same functions, but that might make learning quite difficult because, again, their loss is very sensitive to our parameter vector, whereas in the situation on the right, if you take that data cloud and you move it into the origin and you make it unit variance, then now, again, we can still classify that data quite well, but now as we wiggle that line a little bit, then our loss function is less sensitive to small perturbations in the parameter values. That maybe makes optimization a little bit easier, as we'll see a little bit going forward. By the way, this situation is not only in the linear classification case. Inside a neural network, remember we kind of have these interleavings of these linear matrix multiplies, or convolutions, followed by non-linear activation functions. 
If the input to some layer in your neural network is not centered or not zero mean, not unit variance, then again, small perturbations in the weight matrix of that layer of the network could cause large perturbations in the output of that layer, which, again, might make learning difficult. This is kind of a little bit of extra intuition about why normalization might be important. Because we have this intuition that normalization is so important, we talked about batch normalization, which is where we just add this additional layer inside our networks to just force all of the intermediate activations to be zero mean and unit variance. I've sort of resummarized the batch normalization equations here with the shapes a little bit more explicitly. Hopefully this can help you out when you're implementing this thing on assignment two. But again, in batch normalization, we have this idea that in the forward pass, we use the statistics of the mini batch to compute a mean and a standard deviation, and then use those estimates to normalize our data on the forward pass. Then we also reintroduce the scale and shift parameters to increase the expressivity of the layer. You might want to refer back to this when working on assignment two. We also talked last time a little bit about babysitting the learning process, how you should probably be looking at your loss curves during training. Here's an example of some networks I was actually training over the weekend. This is usually my setup when I'm working on these things. On the left, I have some plot showing the training loss over time. You can see it's kind of going down, which means my network is reducing the loss. It's doing well. On the right, there's this plot where the X axis is, again, time, or the iteration number, and the Y axis is my performance measure both on my training set and on my validation set. You can see that as we go over time, then my training set performance goes up and up and up and up and up as my loss function goes down, but at some point, my validation set performance kind of plateaus. This kind of suggests that maybe I'm overfitting in this situation. Maybe I should have been trying to add additional regularization. We also talked a bit last time about hyperparameter search. All these networks have sort of a large zoo of hyperparameters. It's pretty important to set them correctly. We talked a little bit about grid search versus random search, and how random search is maybe a little bit nicer in theory because in the situation where your performance might be more sensitive, with respect to one hyperparameter than other, and random search lets you cover that space a little bit better. We also talked about the idea of coarse to fine search, where when you're doing this hyperparameter optimization, probably you want to start with very wide ranges for your hyperparameters, only train for a couple iterations, and then based on those results, you kind of narrow in on the range of hyperparameters that are good. Now, again, redo your search in a smaller range for more iterations. You can kind of iterate this process to kind of hone in on the right region for hyperparameters. But again, it's really important to, at the start, have a very coarse range to start with, where you want very, very wide ranges for all your hyperparameters. Ideally, those ranges should be so wide that your network is kind of blowing up at either end of the range so that you know that you've searched a wide enough range for those things. Question? 
- [Student] How many [speaks too low to hear] optimize at once? [speaks too low to hear] - The question is how many hyperparameters do we typically search at a time? Here is two, but there's a lot more than two in these typical things. It kind of depends on the exact model and the exact architecture, but because the number of possibilities is exponential in the number of hyperparameters, you can't really test too many at a time. It also kind of depends on how many machines you have available. It kind of varies from person to person and from experiment to experiment. But generally, I try not to do this over more than maybe two or three or four at a time at most because, again, this exponential search just gets out of control. Typically, learning rate is the really important one that you need to nail first. Then other things, like regularization, like learning rate decay, model size, these other types of things tend to be a little bit less sensitive than learning rate. Sometimes you might do kind of a block coordinate descent, where you go and find the good learning rate, then you go back and try to look at different model sizes. This can help you cut down on the exponential search a little bit, but it's a little bit problem dependent on exactly which ones you should be searching over in which order. More questions? - [Student] [speaks too low to hear] Another parameter, but then changing that other parameter, two or three other parameters, makes it so that your learning rate or the ideal learning rate is still [speaks too low to hear]. - Question is how often does it happen where when you change one hyperparameter, then the other, the optimal values of the other hyperparameters change? That does happen sometimes, although for learning rates, that's typically less of a problem. For learning rates, typically you want to get in a good range, and then set it maybe even a little bit lower than optimal, and let it go for a long time. Then if you do that, combined with some of the fancier optimization strategies that we'll talk about today, then a lot of models tend to be a little bit less sensitive to learning rate once you get them in a good range. Sorry, did you have a question in front, as well? - [Student] [speaks too low to hear] - The question is what's wrong with having a small learning rate and increasing the number of epochs? The answer is that it might take a very long time. [laughs] - [Student] [speaks too low to hear] - Intuitively, if you set the learning rate very low and let it go for a very long time, then this should, in theory, always work. But in practice, those factors of 10 or 100 actually matter a lot when you're training these things. Maybe if you got the right learning rate, you could train it in six hours, 12 hours or a day, but then if you just were super safe and dropped it by a factor of 10 or by a factor of 100, now that one-day training becomes 100 days of training. That's three months. That's not going to be good. When you're taking these intro computer science classes, they always kind of sweep the constants under the rug, but when you're actually thinking about training things, those constants end up mattering a lot. Another question? - [Student] If you have a low learning rate, [speaks too low to hear]. - Question is for a low learning rate, are you more likely to be stuck in local optima? I think that makes some intuitive sense, but in practice, that seems not to be much of a problem. I think we'll talk a bit more about that later today. 
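To make that coarse-to-fine random search concrete, here is a minimal sketch, assuming a hypothetical train_and_eval helper that trains a small model briefly and returns validation accuracy; the ranges and trial counts are just illustrative, not recommendations.

import numpy as np

def train_and_eval(lr, reg):
    # Hypothetical stand-in: train for a few epochs with this learning rate
    # and regularization strength, then return validation accuracy.
    return np.random.rand()

def random_search(num_trials, lr_exp_range, reg_exp_range):
    results = []
    for _ in range(num_trials):
        # Sample on a log scale, since these hyperparameters act multiplicatively.
        lr = 10 ** np.random.uniform(*lr_exp_range)
        reg = 10 ** np.random.uniform(*reg_exp_range)
        results.append((train_and_eval(lr, reg), lr, reg))
    return sorted(results, reverse=True)

# Coarse stage: very wide ranges, short training runs.
coarse = random_search(20, lr_exp_range=(-6, 0), reg_exp_range=(-5, 5))
# Fine stage: narrow the ranges around the best coarse results, train longer.
fine = random_search(20, lr_exp_range=(-4, -2), reg_exp_range=(-3, -1))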
Today I wanted to talk about a couple other really interesting and important topics when we're training neural networks. In particular, we've kind of alluded to this idea of fancier, more powerful optimization algorithms a couple times. I wanted to spend some time today and really dig into those and talk about what are the actual optimization algorithms that most people are using these days. We also touched on regularization in earlier lectures. This concept of making your network do additional things to reduce the gap between train and test error. I wanted to talk about some more regularization strategies that people are using in practice with neural networks. Finally, I also wanted to talk a bit about transfer learning, where you can sometimes get away with using less data than you think by transferring from one problem to another. If you recall from a few lectures ago, the kind of core strategy in training neural networks is an optimization problem where we write down some loss function, which, for each value of the network weights, tells us how good or bad that value of the weights is doing on our problem. Then we imagine that this loss function gives us some nice landscape over the weights, where on the right, I've shown this maybe small, two-dimensional problem, where the X and Y axes are two values of the weights. Then the color of the plot kind of represents the value of the loss. In this kind of cartoon picture of a two-dimensional problem, we're only optimizing over these two values, W one, W two. The goal is to find the most red region in this case, which corresponds to the setting of the weights with the lowest loss. Remember, we've been working so far with this extremely simple optimization algorithm, stochastic gradient descent, where it's super simple, it's three lines. While true, we first evaluate the loss and the gradient on some mini batch of data. Then we step, updating our parameter vector in the negative direction of the gradient because this gives, again, the direction of greatest decrease of the loss function. Then we repeat this over and over again, and hopefully we converge to the red region and we get great errors and we're very happy. But unfortunately, this relatively simple optimization algorithm has quite a lot of problems that actually could come up in practice. One problem with stochastic gradient descent, imagine what happens if our objective function looks something like this, where, again, we're plotting two values, W one and W two. As we change one of those values, the loss function changes very slowly. As we change the horizontal value, then our loss changes slowly. As we go up and down in this landscape, now our loss is very sensitive to changes in the vertical direction. By the way, this is referred to as the loss having a bad condition number at this point, which is the ratio between the largest and smallest singular values of the Hessian matrix at that point. But the intuitive idea is that the loss landscape kind of looks like a taco shell. It's sort of very sensitive in one direction, not sensitive in the other direction. The question is what might SGD, stochastic gradient descent, do on a function that looks like this? If you run stochastic gradient descent on this type of function, you might get this characteristic zigzagging behavior, because for this type of objective function, the direction of the gradient does not align with the direction towards the minima.
When you compute the gradient and take a step, you might step sort of over this line and sort of zigzag back and forth. In effect, you get very slow progress along the horizontal dimension, which is the less sensitive dimension, and you get this zigzagging, nasty, nasty zigzagging behavior across the fast-changing dimension. This is undesirable behavior. By the way, this problem actually becomes much more common in high dimensions. In this kind of cartoon picture, we're only showing a two-dimensional optimization landscape, but in practice, our neural networks might have millions, tens of millions, hundreds of millions of parameters. That's hundreds of millions of directions along which this thing can move. Now among those hundreds of millions of different directions to move, if the ratio between the largest one and the smallest one is bad, then SGD will not perform so nicely. You can imagine that if we have 100 million parameters, probably the maximum ratio between those two will be quite large. I think this is actually quite a big problem in practice for many high-dimensional problems. Another problem with SGD has to do with this idea of local minima or saddle points. Here I've sort of swapped the graph a little bit. Now the X axis is showing the value of one parameter, and then the Y axis is showing the value of the loss. In this top example, we have kind of this curvy objective function, where there's a valley in the middle. What happens to SGD in this situation? - [Student] [speaks too low to hear] - In this situation, SGD will get stuck because at this local minima, the gradient is zero because it's locally flat. Now remember with SGD, we compute the gradient and step in the direction of opposite gradient, so if at our current point, the opposite gradient is zero, then we're not going to make any progress, and we'll get stuck at this point. There's another problem with this idea of saddle points. Rather than being a local minima, you can imagine a point where in one direction we go up, and in the other direction we go down. Then at our current point, the gradient is zero. Again, in this situation, the function will get stuck at the saddle point because the gradient is zero. Although one thing I'd like to point out is that in one dimension, in a one-dimensional problem like this, local minima seem like a big problem and saddle points seem like kind of not something to worry about, but in fact, it's the opposite once you move to very high-dimensional problems because, again, if you think about you're in this 100 million dimensional space, what does a saddle point mean? That means that at my current point, some directions the loss goes up, and some directions the loss goes down. If you have 100 million dimensions, that's probably going to happen more frequently than, that's probably going to happen almost everywhere, basically. Whereas a local minima says that of all those 100 million directions that I can move, every one of them causes the loss to go up. In fact, that seems pretty rare when you're thinking about, again, these very high-dimensional problems. Really, the idea that has come to light in the last few years is that when you're training these very large neural networks, the problem is more about saddle points and less about local minima. By the way, this also is a problem not just exactly at the saddle point, but also near the saddle point. 
If you look at the example on the bottom, you see that in the regions around the saddle point, the gradient isn't zero, but the slope is very small. That means that if we're, again, just stepping in the direction of the gradient, and that gradient is very small, we're going to make very, very slow progress whenever our current parameter value is near a saddle point in the objective landscape. This is actually a big problem. Another problem with SGD comes from the S. Remember that SGD is stochastic gradient descent. Recall that our loss function is typically defined by computing the loss over many, many different examples. In this case, if N is your whole training set, then that could be something like a million. Each time computing the loss would be very, very expensive. In practice, remember that we often estimate the loss and estimate the gradient using a small mini batch of examples. What this means is that we're not actually getting the true information about the gradient at every time step. Instead, we're just getting some noisy estimate of the gradient at our current point. Here on the right, I've kind of faked this plot a little bit. I've just added random uniform noise to the gradient at every point, and then run SGD with these noisy, messed up gradients. This is maybe not exactly what happens with the SGD process, but it still gives you the sense that if there's noise in your gradient estimates, then vanilla SGD kind of meanders around the space and might actually take a long time to get towards the minima. Now that we've talked about a lot of these problems... Sorry, was there a question? - [Student] [speaks too low to hear] - The question is do all of these just go away if we use normal gradient descent? Let's see. I think that the taco shell problem of high condition numbers is still a problem with full batch gradient descent. The noise. As we'll see, we might sometimes introduce additional noise into the network, not only due to sampling mini batches, but also due to explicit stochasticity in the network, so we'll see that later. That can still be a problem. Saddle points, that's still a problem for full batch gradient descent because there can still be saddle points in the full objective landscape. Basically, even if we go to full batch gradient descent, it doesn't really solve these problems. We kind of need to think about a slightly fancier optimization algorithm that can try to address these concerns. Thankfully, there's a really, really simple strategy that works pretty well at addressing many of these problems. That's this idea of adding a momentum term to our stochastic gradient descent. Here on the left, we have our classic old friend, SGD, where we just always step in the direction of the gradient. But now on the right, we have this minor, minor variant called SGD plus momentum, which is now two equations and five lines of code, so it's twice as complicated. But it's very simple. The idea is that we maintain a velocity over time, and we add our gradient estimates to the velocity. Then we step in the direction of the velocity, rather than stepping in the direction of the gradient. This is very, very simple. We also have this hyperparameter rho now, which corresponds to friction. Now at every time step, we take our current velocity, we decay the current velocity by the friction constant, rho, which is often something high, like .9 is a common choice. We take our current velocity, we decay it by friction and we add in our gradient.
Now we step in the direction of our velocity vector, rather than the direction of our raw gradient vector. This super, super simple strategy actually helps for all of these problems that we just talked about. If you think about what happens at local minima or saddle points, then if we're imagining velocity in this system, then you kind of have this physical interpretation of this ball kind of rolling down the hill, picking up speed as it comes down. Now once we have velocity, then even when we pass that point of local minima, the point will still have velocity, even if it doesn't have gradient. Then we can hopefully get over this local minima and continue downward. There's this similar intuition near saddle points, where even though the gradient around the saddle point is very small, we have this velocity vector that we've built up as we roll downhill. That can hopefully carry us through the saddle point and let us continue rolling all the way down. If you think about what happens in poor conditioning, now if we were to have these kind of zigzagging approximations to the gradient, then those zigzags will hopefully cancel each other out pretty fast once we're using momentum. This will effectively reduce the amount by which we step in the sensitive direction, whereas in the horizontal direction, our velocity will just keep building up, and will actually accelerate our descent across that less sensitive dimension. Adding momentum here can actually help us with this high condition number problem, as well. Finally, on the right, we've repeated the same visualization of gradient descent with noise. Here, the black is this vanilla SGD, which is sort of zigzagging all over the place, where the blue line is showing now SGD with momentum. You can see that because we're adding it, we're building up this velocity over time, the noise kind of gets averaged out in our gradient estimates. Now SGD ends up taking a much smoother path towards the minima, compared with the SGD, which is kind of meandering due to noise. Question? - [Student] [speaks too low to hear] - The question is how does SGD momentum help with the poorly conditioned coordinate? The idea is that if you go back and look at this velocity estimate and look at the velocity computation, we're adding in the gradient at every time step. It kind of depends on your setting of rho, that hyperparameter, but you can imagine that if the gradient is relatively small, and if rho is well behaved in this situation, then our velocity could actually monotonically increase up to a point where the velocity could now be larger than the actual gradient. Then we might actually make faster progress along the poorly conditioned dimension. Kind of one picture that you can have in mind when we're doing SGD plus momentum is that the red here is our current point. At our current point, we have some red vector, which is the direction of the gradient, or rather our estimate of the gradient at the current point. Green is now the direction of our velocity vector. Now when we do the momentum update, we're actually stepping according to a weighted average of these two. This helps overcome some noise in our gradient estimate. There's a slight variation of momentum that you sometimes see, called Nesterov accelerated gradient, also sometimes called Nesterov momentum. That switches up this order of things a little bit. In sort of normal SGD momentum, we imagine that we estimate the gradient at our current point, and then take a mix of our velocity and our gradient. 
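To make the two update rules concrete, here is a minimal numpy sketch of vanilla SGD next to SGD plus momentum; the toy quadratic objective, the learning rate, and rho equal to .9 are just illustrative choices, and grad_f stands in for whatever computes your mini-batch gradient.

import numpy as np

def sgd_step(w, grad, lr):
    # Vanilla SGD: step directly along the negative gradient.
    return w - lr * grad

def sgd_momentum_step(w, v, grad, lr, rho=0.9):
    # Decay the old velocity by the friction constant rho, add in the gradient,
    # then step along the velocity instead of the raw gradient.
    v = rho * v + grad
    return w - lr * v, v

# Toy usage on a badly conditioned quadratic, f(w) = 0.5 * w^T A w.
A = np.diag([1.0, 50.0])
grad_f = lambda w: A @ w
w, v = np.array([1.0, 1.0]), np.zeros(2)
for _ in range(100):
    w, v = sgd_momentum_step(w, v, grad_f(w), lr=1e-2)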
With Nesterov accelerated gradient, you do something a little bit different. Here, you start at the red point. You step in the direction of where the velocity would take you. You evaluate the gradient at that point. Then you go back to your original point and kind of mix together those two. This is kind of a funny interpretation, but you can imagine that you're kind of mixing together information a little bit more. If your velocity direction was actually a little bit wrong, it lets you incorporate gradient information from a little bit larger parts of the objective landscape. This also has some really nice theoretical properties when it comes to convex optimization, but those guarantees go a little bit out the window once it comes to non-convex problems like neural networks. Writing it down in equations, Nesterov momentum looks something like this, where now to update our velocity, we take a step, according to our previous velocity, and evaluate that gradient there. Now when we take our next step, we actually step in the direction of our velocity that's incorporating information from these multiple points. Question? - [Student] [speaks too low to hear] - Oh, sorry. The question is what's a good initialization for the velocity? This is almost always zero. It's not even a hyperparameter. Just set it to zero and don't worry. Another question? - [Student] [speaks too low to hear] - Intuitively, the velocity is kind of a weighted sum of your gradients that you've seen over time. - [Student] [speaks too low to hear] - With more recent gradients being weighted heavier. At every time step, we take our old velocity, we decay by friction and we add in our current gradient. You can kind of think of this as a smooth moving average of your recent gradients with kind of a exponentially decaying weight on your gradients going back in time. This Nesterov formulation is a little bit annoying 'cause if you look at this, normally when you have your loss function, you want to evaluate your loss and your gradient at the same point. Nesterov breaks this a little bit. It's a little bit annoying to work with. Thankfully, there's a cute change of variables you can do. If you do the change of variables and reshuffle a little bit, then you can write Nesterov momentum in a slightly different way that now, again, lets you evaluate the loss and the gradient at the same point always. Once you make this change of variables, you get kind of a nice interpretation of Nesterov, which is that here in the first step, this looks exactly like updating the velocity in the vanilla SGD momentum case, where we have our current velocity, we evaluate gradient at the current point and mix these two together in a decaying way. Now in the second update, now when we're actually updating our parameter vector, if you look at the second equation, we have our current point plus our current velocity plus a weighted difference between our current velocity and our previous velocity. Here, Nesterov momentum is kind of incorporating some kind of error-correcting term between your current velocity and your previous velocity. If we look at SGD, SGD momentum and Nesterov momentum on this kind of simple problem, compared with SGD, we notice that SGD kind of takes this, SGD is in the black, kind of taking this slow progress toward the minima. The blue and the green show momentum and Nesterov. 
These have this behavior of kind of overshooting the minimum 'cause they're building up velocity going past the minimum, and then kind of correcting themselves and coming back towards the minima. Question? - [Student] [speaks too low to hear] - The question is, this picture looks good, but what happens if your minimum actually lies in this very narrow basin? Will the velocity just cause you to skip right over that minima? That's actually a really interesting point, and the subject of some recent theoretical work, but the idea is that maybe those really sharp minima are actually bad minima. We don't want to even land in those 'cause the idea is that maybe if you have a very sharp minima, that actually could be a minima that overfits more. If you imagine that we doubled our training set, the whole optimization landscape would change, and maybe that very sensitive minima would actually disappear if we were to collect more training data. We kind of have this intuition that we maybe want to land in very flat minima because those very flat minima are probably more robust as we change the training data. Those flat minima might actually generalize better to testing data. This is, again, sort of very recent theoretical work, but that's actually a really good point that you bring up. In some sense, it's actually a feature and not a bug that SGD momentum actually skips over those very sharp minima. That's actually a good thing, believe it or not. Another thing you can see is if you look at the difference between momentum and Nesterov here, you can see that because of the correction factor in Nesterov, maybe it's not overshooting quite as drastically, compared to vanilla momentum. Another kind of common optimization strategy is this algorithm called AdaGrad, which John Duchi, who's now a professor here, worked on during his Ph.D. The idea with AdaGrad is that during the course of the optimization, you're going to keep a running estimate or a running sum of all the squared gradients that you see during training. Now rather than having a velocity term, instead we have this grad squared term. During training, we're going to just keep adding the squared gradients to this grad squared term. Now when we update our parameter vector, we'll divide by this grad squared term when we're making our update step. The question is what does this kind of scaling do in this situation where we have a very high condition number? - [Student] [speaks too low to hear] - The idea is that if we have two coordinates, one that always has a very high gradient and one that always has a very small gradient, then as we add the sum of the squares of the small gradient, we're going to be dividing by a small number, so we'll accelerate movement along the slow dimension. Then along the other dimension, where the gradients tend to be very large, then we'll be dividing by a large number, so we'll kind of slow down our progress along the wiggling dimension. But there's kind of a problem here. That's the question of what happens with AdaGrad over the course of training, as t gets larger and larger and larger? - [Student] [speaks too low to hear] - With AdaGrad, the steps actually get smaller and smaller and smaller because we just continue updating this estimate of the squared gradients over time, so this estimate just grows and grows and grows monotonically over the course of training. Now this causes our step size to get smaller and smaller and smaller over time.
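In code, the AdaGrad update is a sketch like this, where grad is assumed to be your mini-batch gradient and the small epsilon constant just guards against dividing by zero:

import numpy as np

def adagrad_step(w, grad_squared, grad, lr, eps=1e-7):
    # Accumulate the element-wise squared gradients over all of training...
    grad_squared = grad_squared + grad * grad
    # ...and scale the step by one over their square root, so coordinates with
    # consistently large gradients take smaller steps, and vice versa.
    w = w - lr * grad / (np.sqrt(grad_squared) + eps)
    return w, grad_squared

Because grad_squared only ever grows, the effective step keeps shrinking over the course of training, which is exactly the behavior in question here.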
Again, in the convex case, there's some really nice theory showing that this is actually really good 'cause in the convex case, as you approach a minimum, you kind of want to slow down so you actually converge. That's actually kind of a feature in the convex case. But in the non-convex case, that's a little bit problematic because as you come towards a saddle point, you might get stuck with AdaGrad, and then you kind of no longer make any progress. There's a slight variation of AdaGrad, called RMSProp, that actually addresses this concern a little bit. Now with RMSProp, we still keep this estimate of the squared gradients, but instead of just letting that squared estimate continually accumulate over training, instead, we let that squared estimate actually decay. This ends up looking kind of like a momentum update, except we're having kind of momentum over the squared gradients, rather than momentum over the actual gradients. Now with RMSProp, after we compute our gradient, we take our current estimate of the grad squared, we multiply it by this decay rate, which is commonly something like .9 or .99. Then we add in one minus the decay rate times our current squared gradient. Now over time, you can imagine that the older squared gradients gradually decay away. Then again, when we make our step, the step looks exactly the same as AdaGrad, where we divide by the squared gradient in the step to again have this nice property of accelerating movement along the one dimension, and slowing down movement along the other dimension. But now with RMSProp, because these estimates are leaky, then it kind of addresses the problem of maybe always slowing down where you might not want to. Here again, we're kind of showing our favorite toy problem with SGD in black, SGD momentum in blue and RMSProp in red. You can see that RMSProp and SGD momentum are both doing much better than SGD, but their qualitative behavior is a little bit different. With SGD momentum, it kind of overshoots the minimum and comes back, whereas with RMSProp, it's kind of adjusting its trajectory such that we're making approximately equal progress among all the dimensions. By the way, you can't actually tell, but this plot is also showing AdaGrad in green with the same learning rate, but it just gets stuck due to this problem of continually decaying learning rates. In practice, AdaGrad is maybe not so common for many of these things. That's a little bit of an unfair comparison of AdaGrad. Probably you need to increase the learning rate with AdaGrad, and then it would end up looking kind of like RMSProp in this case. But in general, we tend not to use AdaGrad so much when training neural networks. Question? - [Student] [speaks too low to hear] - The answer is yes, this problem is convex, but in this case, it's a little bit of an unfair comparison because the learning rates are not so comparable among the methods. I've been a little bit unfair to AdaGrad in this visualization by showing the same learning rate between the different algorithms, when probably you should have separately tuned the learning rates per algorithm. We saw in momentum, we had this idea of velocity, where we're building up velocity by adding in the gradients, and then stepping in the direction of the velocity. We saw with AdaGrad and RMSProp that we had this other idea of building up an estimate of the squared gradients, and then dividing by the squared gradients. Then these both seem like good ideas on their own. Why don't we just stick 'em together and use them both? Maybe that would be even better.
That brings us to this algorithm called Adam, or rather brings us very close to Adam. We'll see in a couple slides that there's a slight correction we need to make here. Here with Adam, we maintain an estimate of the first moment and the second moment. Now in the red, we make this estimate of the first moment as a weighted sum of our gradients. We have this moving estimate of the second moment, like AdaGrad and like RMSProp, which is a moving estimate of our squared gradients. Now when we make our update step, we step using both the first moment, which is kind of our velocity, and also divide by the second moment, or rather the square root of the second moment, which is this squared gradient term. This idea of Adam ends up looking a little bit like RMSProp plus momentum, or like momentum plus this squared gradient scaling. It kind of incorporates the nice properties of both. But there's a little bit of a problem here. That's the question of what happens at the very first time step? At the very first time step, you can see that at the beginning, we've initialized our second moment with zero. Now after one update of the second moment, typically this beta two, second moment decay rate, is something like .9 or .99, something very close to one. After one update, our second moment is still very, very close to zero. Now when we're making our update step here and we divide by our second moment, now we're dividing by a very small number. We're making a very, very large step at the beginning. This very, very large step at the beginning is not really due to the geometry of the problem. It's kind of an artifact of the fact that we initialized our second moment estimate to zero. Question? - [Student] [speaks too low to hear] - That's true. The comment is that if your first moment is also very small, then you're multiplying by small and you're dividing by square root of small squared, so what's going to happen? They might cancel each other out, you might be okay. That's true. Sometimes these cancel each other out and you're okay, but sometimes this ends up taking very large steps right at the beginning. That can be quite bad. Maybe you initialize a little bit poorly. You take a very large step. Now your initialization is completely messed up, and then you're in a very bad part of the objective landscape and you just can't converge from there. Question? - [Student] [speaks too low to hear] - The question is what is this 10 to the minus seven term in the last equation? That actually appears in AdaGrad, RMSProp and Adam. The idea is that we're dividing by something. We want to make sure we're not dividing by zero, so we always add a small positive constant to the denominator, just to make sure we're not dividing by zero. That's technically a hyperparameter, but it tends not to matter too much, so just setting 10 to minus seven, 10 to minus eight, something like that, tends to work well. With Adam, remember we just talked about this idea that at the first couple steps, we might take very large steps and mess ourselves up. Adam also adds this bias correction term to avoid this problem of taking very large steps at the beginning. You can see that after we update our first and second moments, we create an unbiased estimate of those first and second moments by incorporating the current time step, t. Now we actually make our step using these unbiased estimates, rather than the original first and second moment estimates. This gives us our full form of Adam.
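Putting the pieces together, here is a minimal numpy sketch of the full Adam update, including that bias correction; beta one, beta two, the learning rate, and the epsilon constant are the usual kinds of defaults mentioned below, and grad is assumed to be your mini-batch gradient.

import numpy as np

def adam_step(w, m, v, t, grad, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-7):
    # First moment: a decaying average of the gradients (the momentum-like part).
    m = beta1 * m + (1 - beta1) * grad
    # Second moment: a decaying average of squared gradients (the RMSProp-like part).
    v = beta2 * v + (1 - beta2) * grad * grad
    # Bias correction: compensate for m and v starting at zero, so the first
    # few steps are not artificially huge. t counts update steps starting at 1.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v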
By the way, Adam is a really, [laughs] really good optimization algorithm, and it works really well for a lot of different problems, so that's kind of my default optimization algorithm for just about any new problem that I'm tackling. In particular, if you set beta one equals .9, beta two equals .999, learning rate one e minus three or five e minus four, that's a great starting point for just about all the architectures I've ever worked with. Try that. That's a really good place to start, in general. [laughs] If we actually plot these things out and look at SGD, SGD momentum, RMSProp and Adam on the same problem, you can see that Adam, in the purple here, kind of combines elements of both SGD momentum and RMSProp. Adam kind of overshoots the minimum a little bit like SGD momentum, but it doesn't overshoot quite as much as momentum. Adam also has this similar behavior to RMSProp of kind of trying to curve to make equal progress along all dimensions. Maybe in this small two-dimensional example, Adam converged about similarly to other ones, but you can see qualitatively that it's kind of combining the behaviors of both momentum and RMSProp. Any questions about optimization algorithms? - [Student] [speaks too low to hear] They still take a very long time to train. [speaks too low to hear] - The question is what does Adam not fix? Well, these neural networks are still large, and they still take a long time to train. There can still be a problem. In this picture where we have this landscape of things looking like ovals, we're kind of making estimates along each dimension independently to allow us to speed up or slow down along different coordinate axes, but one problem is that if that taco shell is kind of tilted and is not axis aligned, then we're still only making estimates along the individual axes independently. That corresponds to taking your rotated taco shell and squishing it horizontally and vertically, but you can't actually unrotate it. In cases where you have this kind of rotated picture of poor conditioning, then Adam or any of these other algorithms really can't address that concern. Another thing that we've seen in all these optimization algorithms is learning rate as a hyperparameter. We've seen this picture before a couple times, that as you use different learning rates, sometimes if it's too high, it might explode in the yellow. If it's a very low learning rate, in the blue, it might take a very long time to converge. It's kind of tricky to pick the right learning rate. This is a little bit of a trick question because we don't actually have to stick with one learning rate throughout the course of training. Sometimes you'll see people decay the learning rates over time, where we can kind of combine the effects of these different curves on the left, and get the nice properties of each. Sometimes you'll start with a higher learning rate near the start of training, and then decay the learning rate and make it smaller and smaller throughout the course of training. A couple strategies for these would be a step decay, where at the 100,000th iteration, you just decay by some factor and you keep going. You might see an exponential decay, where you continually decay during training. You might see different variations of continually decaying the learning rate during training. If you look at papers, especially the ResNet paper, you often see plots that look kind of like this, where the loss is kind of going down, then dropping, then flattening again, then dropping again.
What's going on in these plots is that they're using a step decay learning rate, where at these parts where it plateaus and then suddenly drops again, those are the iterations where they dropped the learning rate by some factor. This idea of dropping the learning rate, you might imagine that it got near some good region, but now the gradients got smaller, it's kind of bouncing around too much. Then if we drop the learning rate, it lets it slow down and continue to make progress down the landscape. This tends to help in practice sometimes. Although one thing to point out is that learning rate decay is a little bit more common with SGD momentum, and a little bit less common with something like Adam. Another thing I'd like to point out is that learning rate decay is kind of a second-order hyperparameter. You typically should not optimize over this thing from the start. Usually when you're kind of getting networks to work at the beginning, you want to pick a good learning rate with no learning rate decay from the start. Trying to cross-validate jointly over learning rate decay and initial learning rate and other things, you'll just get confused. What you do for setting learning rate decay is try with no decay, see what happens. Then kind of eyeball the loss curve and see where you think you might need decay. Another thing I wanted to mention briefly is this idea of all these algorithms that we've talked about are first-order optimization algorithms. In this picture, in this one-dimensional picture, we have this kind of curvy objective function at our current point in red. What we're basically doing is computing the gradient at that point. We're using the gradient information to compute some linear approximation to our function, which is kind of a first-order Taylor approximation to our function. Now we pretend that the first-order approximation is our actual function, and we make a step to try to minimize the approximation. But this approximation doesn't hold for very large regions, so we can't step too far in that direction. But really, the idea here is that we're only incorporating information about the first derivative of the function. You can actually go a little bit fancier. There's this idea of second-order approximation, where we take into account both first derivative and second derivative information. Now we make a second-order Taylor approximation to our function and kind of locally approximate our function with a quadratic. Now with a quadratic, you can step right to the minimum, and you're really happy. That's this idea of second-order optimization. When you generalize this to multiple dimensions, you get something called the Newton step, where you compute this Hessian matrix, which is a matrix of second derivatives, and you end up inverting this Hessian matrix in order to step directly to the minimum of this quadratic approximation to your function. Does anyone spot something that's quite different about this update rule, compared to the other ones that we've seen? - [Student] [speaks too low to hear] - This doesn't have a learning rate. That's kind of cool. We're making this quadratic approximation and we're stepping right to the minimum of the quadratic. At least in this vanilla version of Newton's method, you don't actually need a learning rate. You just always step to the minimum at every time step. 
However, in practice, you might end up having a learning rate anyway because, again, that quadratic approximation might not be perfect, so you might only want to step in the direction towards the minimum, rather than actually stepping to the minimum, but at least in this vanilla version, it doesn't have a learning rate. But unfortunately, this is maybe a little bit impractical for deep learning because this Hessian matrix is N by N, where N is the number of parameters in your network. If N is 100 million, then 100 million squared is way too big. You definitely can't store that in memory, and you definitely can't invert it. In practice, people sometimes use these quasi-Newton methods that, rather than working with and inverting the full Hessian, work with approximations. Low-rank approximations are common. You'll sometimes see these for some problems. L-BFGS is one particular second-order optimizer that keeps this approximation of the Hessian that you'll sometimes see, but in practice, it doesn't work too well for many deep learning problems because these second-order approximations don't really handle the stochastic case very nicely. They also tend not to work so well with non-convex problems. I don't want to get into that right now too much. In practice, Adam is probably a really good choice for many different neural network things, but if you're in a situation where you can afford to do full batch updates, and you know that your problem doesn't have really any stochasticity, then L-BFGS is kind of a good choice. L-BFGS doesn't really get used for training neural networks too much, but as we'll see in a couple of lectures, it does sometimes get used for things like style transfer, where you actually have less stochasticity and fewer parameters, but you still want to solve an optimization problem. All of these strategies we've talked about so far are about reducing training error. All these optimization algorithms are really about driving down your training error and minimizing your objective function, but we don't really care about training error that much. Instead, we really care about our performance on unseen data. We really care about reducing this gap between train and test error. The question is once we're already good at optimizing our objective function, what can we do to try to reduce this gap and make our model perform better on unseen data? One really quick and dirty, easy thing to try is this idea of model ensembles that sometimes works across many different areas in machine learning. The idea is pretty simple. Rather than having just one model, we'll train 10 different models independently from different initial random restarts. Now at test time, we'll run our data through all of the 10 models and average the predictions of those 10 models. Adding these multiple models together tends to reduce overfitting a little bit and tends to improve performance a little bit, typically by a couple percent. This is generally not a drastic improvement, but it is a consistent improvement. You'll see that in competitions, like ImageNet and other things like that, using model ensembles is very common to get maximal performance. You can actually get a little bit creative with this. Sometimes rather than training separate models independently, you can just keep multiple snapshots of your model during the course of training, and then use these as your ensembles.
Then you still, at test time, need to average the predictions of these multiple snapshots, but you can collect the snapshots during the course of training. There's actually a very nice paper being presented at ICLR this week that kind of has a fancy version of this idea, where we use a crazy learning rate schedule, where our learning rate goes very slow, then very fast, then very slow, then very fast. The idea is that with this crazy learning rate schedule, then over the course of training, the model might be able to converge to different regions in the objective landscape that all are reasonably good. If you do an ensemble over these different snapshots, then you can improve your performance quite nicely, even though you're only training the model once. Questions? - [Student] [speaks too low to hear] - The question is, it's bad when there's a large gap between error 'cause that means you're overfitting, but if there's no gap, then is that also maybe bad? Do we actually want some small, optimal gap between the two? We don't really care about the gap. What we really care about is maximizing the performance on the validation set. What tends to happen is that if you don't see a gap, then you could have improved your absolute performance, in many cases, by overfitting a little bit more. There's this weird correlation between the absolute performance on the validation set and the size of that gap. We only care about absolute performance. Question in the back? - [Student] Are hyperparameters the same for the ensemble? - Are the hyperparameters the same for the ensembles? That's a good question. Sometimes they're not. You might want to try different sizes of the model, different learning rates, different regularization strategies and ensemble across these different things. That actually does happen sometimes. Another little trick you can do sometimes is that during training, you might actually keep an exponentially decaying average of your parameter vector itself to kind of have a smooth ensemble of your own network during training. Then use this smoothly decaying average of your parameter vector, rather than the actual checkpoints themselves. This is called Polyak averaging, and it sometimes helps a little bit. It's just another one of these small tricks you can sometimes add, but it's not maybe too common in practice. Another question you might have is that how can we actually improve the performance of single models? When we have ensembles, we still need to run, like, 10 models at test time. That's not so great. We really want some strategies to improve the performance of our single models. That's really this idea of regularization, where we add something to our model to prevent it from fitting the training data too well in the attempts to make it perform better on unseen data. We've seen a couple ideas, a couple methods for regularization already, where we add some explicit extra term to the loss. Where we have this one term telling the model to fit the data, and another term that's a regularization term. You saw this in homework one, where we used L2 regularization. As we talked about in lecture a couple lectures ago, this L2 regularization doesn't really make maybe a lot of sense in the context of neural networks. Sometimes we use other things for neural networks. One regularization strategy that's super, super common for neural networks is this idea of dropout. Dropout is super simple. 
Every time we do a forward pass through the network, at every layer, we're going to randomly set some neurons to zero. Every time we do a forward pass, we'll set a different random subset of the neurons to zero. This kind of proceeds one layer at a time. We run through one layer, we compute the value of the layer, we randomly set some of them to zero, and then we continue up through the network. Now if you look at this fully connected network on the left versus a dropout version of the same network on the right, you can see that after we do dropout, it kind of looks like a smaller version of the same network, where we're only using some subset of the neurons. This subset that we use varies at each iteration, at each forward pass. Question? - [Student] [speaks too low to hear] - The question is what are we setting to zero? It's the activations. Each layer computes the previous activation times the weight matrix, which gives you your next activation. Then you just take that activation, set some of them to zero, and then your next layer will be partially zeroed activations times another matrix to give you your next activations. Question? - [Student] [speaks too low to hear] - Question is which layers do you do this on? It's more common in fully connected layers, but you sometimes see this in convolutional layers, as well. When you're working in convolutional layers, sometimes instead of dropping each activation randomly, you sometimes might drop entire feature maps randomly. In convolutions, you have this channel dimension, and you might drop out entire channels, rather than random elements. Dropout is kind of super simple in practice. It only requires adding two lines, one line per dropout call. Here we have a three-layer neural network, and we've added dropout. You can see that all we needed to do was add this extra line where we randomly set some things to zero. This is super easy to implement. But the question is why is this even a good idea? We're seriously messing with the network at training time by setting a bunch of its values to zero. How can this possibly make sense? One sort of slightly hand wavy idea that people have is that dropout helps prevent co-adaptation of features. Maybe if you imagine that we're trying to classify cats, maybe in some universe, the network might learn one neuron for having an ear, one neuron for having a tail, one neuron for the input being furry. Then it kind of combines these things together to decide whether or not it's a cat. But now if we have dropout, then in making the final decision about catness, the network cannot depend too much on any one of these features. Instead, it kind of needs to distribute its idea of catness across many different features. This might help prevent overfitting somehow. Another interpretation of dropout that's come out a little bit more recently is that it's kind of like doing model ensembling within a single model. If you look at the picture on the left, after you apply dropout to the network, we're kind of computing this subnetwork using some subset of the neurons. Now every different potential dropout mask leads to a different potential subnetwork. Now dropout is kind of learning a whole ensemble of networks all at the same time that all share parameters. By the way, because the number of potential dropout masks grows exponentially in the number of neurons, you're never going to sample all of these things. This is really a gigantic, gigantic ensemble of networks that are all being trained simultaneously.
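As a minimal sketch of that training-time forward pass for a three-layer network, assuming numpy weight matrices W1, W2, W3 and an input x that you already have, with biases omitted for brevity (what to do at test time is discussed next):

import numpy as np

def dropout_forward(x, W1, W2, W3, p=0.5):
    # p is the probability of keeping each unit; a fresh random mask is drawn
    # at every forward pass.
    h1 = np.maximum(0, W1 @ x)                  # first hidden layer (ReLU)
    h1 = h1 * (np.random.rand(*h1.shape) < p)   # randomly zero some activations
    h2 = np.maximum(0, W2 @ h1)                 # second hidden layer (ReLU)
    h2 = h2 * (np.random.rand(*h2.shape) < p)   # a different mask for this layer
    return W3 @ h2                              # class scores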
Then the question is what happens at test time? Once we move to dropout, we've kind of fundamentally changed the operation of our neural network. Previously, we've had our neural network, f, be a function of the weights, w, and the inputs, x, and then produce the output, y. But now, our network is also taking this additional input, z, which is some random dropout mask. That z is random. Having randomness at test time is maybe bad. Imagine that you're working at Facebook, and you want to classify the images that people are uploading. Then today, your image gets classified as a cat, and tomorrow it doesn't. That would be really weird and really bad. You'd probably want to eliminate this stochasticity at test time once the network is already trained. Then we kind of want to average out this randomness. If you write this out, you can imagine actually marginalizing out this randomness with some integral, but in practice, this integral is totally intractable. We don't know how to evaluate this thing. You're in bad shape. One thing you might imagine doing is approximating this integral via sampling, where you draw multiple samples of z and then average them out at test time, but this still would introduce some randomness, which is little bit bad. Thankfully, in the case of dropout, we can actually approximate this integral in kind of a cheap way locally. If we consider a single neuron, the output is a, the inputs are x and y, with two weights, w one, w two. Then at test time, our value a is just w one x plus w two y. Now imagine that we trained to this network. During training, we used dropout with probability 1/2 of dropping our neurons. Now the expected value of a during training, we can kind of compute analytically for this small case. There's four possible dropout masks, and we're going to average out the values across these four masks. We can see that the expected value of a during training is 1/2 w one x plus w two y. There's this disconnect between this average value of w one x plus w two y at test time, and at training time, the average value is only 1/2 as much. One cheap thing we can do is that at test time, we don't have any stochasticity. Instead, we just multiply this output by the dropout probability. Now these expected values are the same. This is kind of like a local cheap approximation to this complex integral. This is what people really commonly do in practice with dropout. At dropout, we have this predict function, and we just multiply our outputs of the layer by the dropout probability. The summary of dropout is that it's really simple on the forward pass. You're just adding two lines to your implementation to randomly zero out some nodes. Then at the test time prediction function, you just added one little multiplication by your probability. Dropout is super simple. It tends to work well sometimes for regularizing neural networks. By the way, one common trick you see sometimes is this idea of inverted dropout. Maybe at test time, you care more about efficiency, so you want to eliminate that extra multiplication by p at test time. Then what you can do is, at test time, you use the entire weight matrix, but now at training time, instead you divide by p because training is probably happening on a GPU. You don't really care if you do one extra multiply at training time, but then at test time, you kind of want this thing to be as efficient as possible. Question? - [Student] [speaks too low to hear] Now the gradient [speaks too low to hear]. 
- The question is what happens to the gradient during training with dropout? You're right. We only end up propagating the gradients through the nodes that were not dropped. This has the consequence that when you're training with dropout, typically training takes longer because at each step, you're only updating some subparts of the network. When you're using dropout, it typically takes longer to train, but you might have a better generalization after it's converged. Dropout, we kind of saw is like this one concrete instantiation. There's a little bit more general strategy for regularization where during training we add some kind of randomness to the network to prevent it from fitting the training data too well. To kind of mess it up and prevent it from fitting the training data perfectly. Now at test time, we want to average out all that randomness to hopefully improve our generalization. Dropout is probably the most common example of this type of strategy, but actually batch normalization kind of fits this idea, as well. Remember in batch normalization, during training, one data point might appear in different mini batches with different other data points. There's a bit of stochasticity with respect to a single data point with how exactly that point gets normalized during training. But now at test time, we kind of average out this stochasticity by using some global estimates to normalize, rather than the per mini batch estimates. Actually batch normalization tends to have kind of a similar regularizing effect as dropout because they both introduce some kind of stochasticity or noise at training time, but then average it out at test time. Actually, when you train networks with batch normalization, sometimes you don't use dropout at all, and just the batch normalization adds enough of a regularizing effect to your network. Dropout is somewhat nice because you can actually tune the regularization strength by varying that parameter p, and there's no such control in batch normalization. Another kind of strategy that fits in this paradigm is this idea of data augmentation. During training, in a vanilla version for training, we have our data, we have our label. We use it to update our CNN at each time step. But instead, what we can do is randomly transform the image in some way during training such that the label is preserved. Now we train on these random transformations of the image rather than the original images. Sometimes you might see random horizontal flips 'cause if you take a cat and flip it horizontally, it's still a cat. You'll randomly sample crops of different sizes from the image because the random crop of the cat is still a cat. Then during testing, you kind of average out this stochasticity by evaluating with some fixed set of crops, often the four corners and the middle and their flips. What's very common is that when you read, for example, papers on ImageNet, they'll report a single crop performance of their model, which is just like the whole image, and a 10 crop performance of their model, which are these five standard crops plus their flips. Also with data augmentation, you'll sometimes use color jittering, where you might randomly vary the contrast or brightness of your image during training. You can get a little bit more complex with color jittering, as well, where you try to make color jitters that are maybe in the PCA directions of your data space or whatever, where you do some color jittering in some data-dependent way, but that's a little bit less common. 
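As a minimal sketch of this kind of label-preserving transform, assuming the image is a numpy array of shape height by width by 3 and is at least crop_size pixels on each side; the crop size and flip probability are just illustrative:

import numpy as np

def augment(img, crop_size=224):
    # Random horizontal flip: a flipped cat is still a cat.
    if np.random.rand() < 0.5:
        img = img[:, ::-1, :]
    # Random crop: pick a random crop_size x crop_size window.
    h, w, _ = img.shape
    top = np.random.randint(0, h - crop_size + 1)
    left = np.random.randint(0, w - crop_size + 1)
    return img[top:top + crop_size, left:left + crop_size, :]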
In general, data augmentation is this really general thing that you can apply to just about any problem. Whatever problem you're trying to solve, you kind of think about what are the ways that I can transform my data without changing the label? Now during training, you just apply these random transformations to your input data. This sort of has a regularizing effect on the network because you're, again, adding some kind of stochasticity during training, and then marginalizing it out at test time. Now we've seen three examples of this pattern, dropout, batch normalization, data augmentation, but there's many other examples, as well. Once you have this pattern in your mind, you'll kind of recognize this thing as you read other papers sometimes. There's another kind of related idea to dropout called DropConnect. With DropConnect, it's the same idea, but rather than zeroing out the activations at every forward pass, we randomly zero out some of the values of the weight matrix instead. Again, it kind of has this similar flavor. Another kind of cool idea that I like, this one's not so commonly used, but I just think it's a really cool idea, is this idea of fractional max pooling. Normally when you do two-by-two max pooling, you have these fixed two-by-two regions over which you pool in the forward pass, but now with fractional max pooling, every time we hit our pooling layer, we're going to randomize exactly the regions over which we pool. Here in the example on the right, I've shown three different sets of random pooling regions that you might see during training. Now during test time, you kind of average the stochasticity out by either sticking to some fixed set of pooling regions, or drawing many samples and averaging over them. That's kind of a cool idea, even though it's not so commonly used. Another really kind of surprising paper in this paradigm that actually came out in the last year, so this is new since the last time we taught the class, is this idea of stochastic depth. Here we have a network on the left. The idea is that we have a very deep network, and we're going to randomly drop layers from the network during training, eliminating some layers and only using some subset of the layers at each training step. Now during test time, we'll use the whole network. This is kind of crazy. It's kind of amazing that this works, but this tends to have kind of a similar regularizing effect as dropout and these other strategies. But again, this is super, super cutting-edge research. This is not super commonly used in practice, but it is a cool idea. Any last minute questions about regularization? No? Use it. It's a good idea. Yeah? - [Student] [speaks too low to hear] - The question is do you usually use more than one regularization method? You should generally be using batch normalization as kind of a good thing to have in most networks nowadays, because it helps you converge, especially for very deep things. In many cases, batch normalization alone tends to be enough, but then sometimes if batch normalization alone is not enough, then you can consider adding dropout or other things once you see your network overfitting. You generally don't do a blind cross-validation over these things. Instead, you add them in a targeted way once you see your network is overfitting. One last quick thing is this idea of transfer learning.
We've kind of seen with regularization, we can help reduce the gap between train and test error by adding these different regularization strategies. One problem with overfitting is sometimes you overfit 'cause you don't have enough data. You want to use a big, powerful model, but that big, powerful model is just going to overfit too much on your small dataset. Regularization is one way to combat that, but another way is through using transfer learning. Transfer learning kind of busts this myth that you need a huge amount of data in order to train a CNN. The idea is really simple. You'll maybe first take some CNN, here kind of a VGG style architecture, and you'll train it on a very large dataset, like ImageNet, where you actually have enough data to train the whole network. Now the idea is that you want to apply the features from this dataset to some small dataset that you care about. Maybe instead of classifying the 1,000 ImageNet categories, now you want to classify 10 dog breeds or something like that. You only have a small dataset. Here, our small dataset only has C classes. Then what you'll typically do is take this last fully connected layer, the one going from the last layer of features to the final class scores, and reinitialize that matrix randomly. For ImageNet, it was a 4,096-by-1,000 dimensional matrix. Now for your new classes, it might be 4,096-by-C, or by 10 or whatever. You reinitialize this last matrix randomly, freeze the weights of all the previous layers, and now just basically train a linear classifier: only train the parameters of this last layer and let it converge on your data. This tends to work pretty well if you only have a very small dataset to work with. Now if you have a little bit more data, another thing you can try is actually fine tuning the whole network. After that top layer converges and after you learn that last layer for your data, then you can consider actually trying to update the whole network, as well. If you have more data, then you might consider updating larger parts of the network. A general strategy here is that when you're updating the network, you want to drop the learning rate from its initial learning rate, because the original parameters in this network that converged on ImageNet probably worked pretty well generally, and you just want to change them a very small amount to tune performance for your dataset. Then when you're working with transfer learning, you kind of imagine this two-by-two grid of scenarios, where on one axis, you have maybe a very small amount of data for your dataset, or a very large amount of data for your dataset. The other axis is whether your data is very similar to ImageNet; ImageNet has a lot of pictures of animals and plants and stuff like that. If you want to just classify other types of animals and plants and other types of images like that, then you're in pretty good shape. Then generally what you do is, if your data is very similar to something like ImageNet and you have a very small amount of data, you can just basically train a linear classifier on top of features extracted using an ImageNet model. If you have a little bit more data to work with, then you might imagine fine tuning the network.
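One way that freeze-and-retrain recipe might look, sketched here in PyTorch with torchvision's pretrained ResNet-18 and a hypothetical 10-class dog-breed problem; the choice of framework, model, and learning rates are illustrative assumptions, not the exact setup from the lecture.

```python
import torch.nn as nn
import torch.optim as optim
import torchvision

# Start from a model pretrained on ImageNet.
model = torchvision.models.resnet18(pretrained=True)

# Freeze everything that was learned on ImageNet...
for param in model.parameters():
    param.requires_grad = False

# ...and reinitialize only the last fully connected layer for our C new classes (say, 10 dog breeds).
model.fc = nn.Linear(model.fc.in_features, 10)

# Phase 1: with everything else frozen, this is basically training a linear classifier on top of features.
optimizer = optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)

# Phase 2 (if you have more data): unfreeze and fine-tune the whole network,
# with a much smaller learning rate than the original ImageNet training run used.
for param in model.parameters():
    param.requires_grad = True
optimizer = optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
```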
However, you sometimes get in trouble if your data looks very different from ImageNet. Maybe you're working with medical images, X-rays or CAT scans or something that looks very different from the images in ImageNet; in that case, you may need to get a little bit more creative. Sometimes it still works well here, but those last layer features might not be so informative. You might consider reinitializing larger parts of the network and getting a little bit more creative and trying more experiments here. This is somewhat mitigated if you have a large amount of data in your very different dataset, 'cause then you can actually fine tune larger parts of the network. Another point I'd like to make is that this idea of transfer learning is super pervasive. It's actually the norm, rather than the exception. As you read computer vision papers, you'll often see system diagrams like this for different tasks. On the left, we're working with object detection. On the right, we're working with image captioning. Both of these models have a CNN that's processing the image. In almost all applications of computer vision these days, most people are not training these things from scratch. Almost always, that CNN will be pretrained on ImageNet, and then potentially fine tuned for the task at hand. Also, in the captioning setting, sometimes you can actually pretrain some word vectors relating to the language, as well. You maybe pretrain the CNN on ImageNet, pretrain some word vectors on a large text corpus, and then fine tune the whole thing for your dataset. Although in the case of captioning, I think this pretraining with word vectors tends to be a little bit less common and a little bit less critical. The takeaway for your projects, and more generally as you work on different models, is that whenever you have some problem that you want to tackle, but you don't have a large dataset for it, then what you should generally do is download some pretrained model that's relatively close to the task you care about, and then either reinitialize parts of that model or fine tune that model for your data. That tends to work pretty well, even if you have only a modest amount of training data to work with. Because this is such a common strategy, all of the different deep learning software packages out there provide a model zoo where you can just download pretrained versions of various models. In summary today, we talked about optimization, which is about how to improve the training loss. We talked about regularization, which is improving your performance on the test data. Model ensembling kind of fits in there. We also talked about transfer learning, which is how you can actually do better with less data. These are all super useful strategies. You should use them in your projects and beyond. Next time, we'll talk more concretely about some of the different deep learning software packages out there.
Lecture 2: Image Classification
Okay, so welcome to lecture two of CS231N. On Tuesday, just to recall, we, sort of, gave you the big picture view of what is computer vision, what is the history, and a little bit of the overview of the class. And today, we're really going to dive in, for the first time, into the details. And we'll start to see, in much more depth, exactly how some of these learning algorithms actually work in practice. So, the first lecture of the class is probably, sort of, the largest big picture vision. And the majority of the lectures in this class will be much more detail-oriented, much more focused on the specific mechanics of these different algorithms. So, today we'll see our first learning algorithm and that'll be really exciting, I think. But, before we get to that, I wanted to talk about a couple of administrative issues. One is Piazza. So, when I checked yesterday, it seemed like we had maybe 500 students signed up on Piazza. Which means that there are several hundred of you who are not yet there. So, we really want Piazza to be the main source of communication between the students and the course staff. So, we've gotten a lot of questions to the staff list about project ideas or questions about midterm attendance or poster session attendance. And any, sort of, questions like that should really go to Piazza. You'll probably get answers to your questions faster on Piazza, because all the TAs know to check that. And it's, sort of, easy for emails to get lost in the shuffle if you just send to the course list. It's also come to my attention that some SCPD students are having a bit of a hard time signing up for Piazza. SCPD students are supposed to receive an @stanford.edu email address. So, once you get that email address, then you can use the Stanford email to sign into Piazza. Probably that doesn't affect those of you who are sitting in the room right now, but it does for those students listening on SCPD. The next administrative issue is about assignment one. Assignment one will be up later today, probably sometime this afternoon, but I promise, before I go to sleep tonight, it'll be up. But, if you're getting a little bit antsy and really want to start working on it right now, then you can look at last year's version of assignment one. It'll be pretty much the same content. We're just reshuffling it a little bit, for example, upgrading it to work with Python 3 rather than Python 2.7, and some other minor cosmetic changes, but the content of the assignment will still be the same as last year. So, in this assignment you'll be implementing your own k-nearest neighbor classifier, which we're going to talk about in this lecture. You'll also implement several different linear classifiers, including the SVM and Softmax, as well as a simple two-layer neural network. And we'll cover all this content over the next couple of lectures. So, all of our assignments are using Python and NumPy. If you aren't familiar with Python or NumPy, then we have written a tutorial that you can find on the course website to try and get you up to speed. But, this is, actually, pretty important. NumPy lets you write these very efficient vectorized operations that let you do quite a lot of computation in just a couple lines of code. Efficiently implementing these vectorized operations is super important for pretty much all aspects of numerical computing and machine learning, and you'll get a lot of practice with this on the first assignment.
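Just to give a flavor of what "vectorized" means here, a toy comparison; the array size is arbitrary and the loop version is only there to show what the one-liner replaces.

```python
import numpy as np

x = np.random.randn(100000)
y = np.random.randn(100000)

# Plain Python loop: simple to read, but slow.
total = 0.0
for i in range(len(x)):
    total += abs(x[i] - y[i])

# The same computation as a single vectorized NumPy expression, typically far faster.
total_vectorized = np.sum(np.abs(x - y))
```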
So, for those of you who don't have a lot of experience with Matlab or NumPy or other types of vectorized tensor computation, I recommend that you start looking at this assignment pretty early and also, read carefully through the tutorial. The other thing I wanted to talk about is that we're happy to announce that we're officially supported through Google Cloud for this class. So, Google Cloud is somewhat similar to Amazon AWS. You can go and start virtual machines up in the cloud. These virtual machines can have GPUs. We're working on the tutorial for exactly how to use Google Cloud and get it to work for the assignments. But our intention is that you'll be able to just download some image, and it'll be very seamless for you to work on the assignments on one of these instances on the cloud. And because Google has, very generously, supported this course, we'll be able to distribute to each of you coupons that let you use Google Cloud credits for free for the class. So you can feel free to use these for the assignments and also for the course projects when you want to start using GPUs and larger machines and whatnot. So, we'll post more details about that, probably, on Piazza later today. But, I just wanted to mention, because I know there had been a couple of questions about, can I use my laptop? Do I have to run on corn? Do I have to, whatever? And the answer is that, you'll be able to run on Google Cloud and we'll provide you some coupons for that. Yeah, so, those are, kind of, the major administrative issues I wanted to talk about today. And then, let's dive into the content. So, the last lecture we talked a little bit about this task of image classification, which is really a core task in computer vision. And this is something that we'll really focus on throughout the course of the class. Is, exactly, how do we work on this image classification task? So, a little bit more concretely, when you're doing image classification, your system receives some input image, which is this cute cat in this example, and the system is aware of some predetermined set of categories or labels. So, these might be, like, a dog or a cat or a truck or a plane, and there's some fixed set of category labels, and the job of the computer is to look at the picture and assign it one of these fixed category labels. This seems like a really easy problem, because so much of your own visual system in your brain is hardwired to doing these, sort of, visual recognition tasks. But this is actually a really, really hard problem for a machine. So, if you dig in and think about, actually, what does a computer see when it looks at this image, it definitely doesn't get this holistic idea of a cat that you see when you look at it. And the computer really is representing the image as this gigantic grid of numbers. So, the image might be something like 800 by 600 pixels. And each pixel is represented by three numbers, giving the red, green, and blue values for that pixel. So, to the computer, this is just a gigantic grid of numbers. And it's very difficult to distill the cat-ness out of this, like, giant array of thousands, or whatever, very many different numbers. So, we refer to this problem as the semantic gap. This idea of a cat, or this label of a cat, is a semantic label that we're assigning to this image, and there's this huge gap between the semantic idea of a cat and these pixel values that the computer is actually seeing. 
And this is a really hard problem because you can change the picture in very small, subtle ways that will cause this pixel grid to change entirely. So, for example, if we took this same cat, and if the cat happened to sit still and not even twitch, not move a muscle, which is never going to happen, but we moved the camera to the other side, then every single pixel in this giant grid of numbers would be completely different. But, somehow, it's still representing the same cat. And our algorithms need to be robust to this. But viewpoint is only one problem; another is illumination. There can be different lighting conditions going on in the scene. Whether the cat is appearing in this very dark, moody scene, or in this very bright, sunlit scene, it's still a cat, and our algorithms need to be robust to that. Objects can also deform. I think cats are, maybe, among the more deformable of animals that you might see out there. And cats can really assume a lot of different, varied poses and positions. And our algorithms should be robust to these different kinds of transforms. There can also be problems of occlusion, where you might only see part of a cat, like just the face, or in this extreme example, just a tail peeking out from under the couch cushion. But, in these cases, it's pretty easy for you, as a person, to realize that this is probably a cat, and you still recognize these images as cats. And this is something that our algorithms also must be robust to, which is quite difficult, I think. There can also be problems of background clutter, where maybe the foreground object of the cat could actually look quite similar in appearance to the background. And this is another thing that we need to handle. There's also this problem of intraclass variation, that this one notion of cat-ness actually spans a lot of different visual appearances. And cats can come in different shapes and sizes and colors and ages. And our algorithm, again, needs to work and handle all these different variations. So, this is actually a really, really challenging problem. And it's sort of easy to forget how easy this is for you, because so much of your brain is specifically tuned for dealing with these things. But now if we want our computer programs to deal with all of these problems, all simultaneously, and not just for cats, by the way, but for just about any object category you can imagine, this is a fantastically challenging problem. And it's, actually, somewhat miraculous that this works at all, in my opinion. But, actually, not only does it work, but these things work very close to human accuracy in some limited situations, and take only hundreds of milliseconds to do so. So, this is some pretty amazing, incredible technology, in my opinion, and over the course of the rest of the class we will really see what kinds of advancements have made this possible. So now, if you, kind of, think about what is the API for writing an image classifier, you might sit down and try to write a method in Python like this, where you want to take in an image and then do some crazy magic and then, eventually, spit out this class label to say cat or dog or whatnot. And there's really no obvious way to do this, right? If you're taking an algorithms class and your task is to sort numbers or compute a convex hull or, even, do something like RSA encryption, you, sort of, can write down an algorithm and enumerate all the steps that need to happen in order for these things to work.
But, when we're trying to recognize objects, or recognize cats in images, there's no really clear, explicit algorithm that makes intuitive sense for how you might go about recognizing these objects. So, this is, again, quite challenging. If you think about it, if it was your first day programming and you had to sit down and write this function, I think most people would be in trouble. That being said, people have definitely made explicit attempts to try to write, sort of, hand-coded rules for recognizing different animals. So, we touched on this a little bit in the last lecture, but maybe one idea for cats is that we know that cats have ears and eyes and mouths and noses. And we know that edges, from Hubel and Wiesel, we know that edges are pretty important when it comes to visual recognition. So one thing we might try to do is compute the edges of this image and then go in and try to categorize all the different corners and boundaries, and say that, if we have maybe three lines meeting this way, then it might be a corner, and an ear has one corner here and one corner there and one corner there, and then, kind of, write down this explicit set of rules for recognizing cats. But this turns out not to work very well. One, it's super brittle. And, two, say, if you want to start over for another object category, and maybe not worry about cats, but talk about trucks or dogs or fishes or something else, then you need to start all over again. So, this is really not a very scalable approach. We want to come up with some algorithm, or some method, for these recognition tasks which scales much more naturally to all the variety of objects in the world. So, the insight that, sort of, makes this all work is this idea of the data-driven approach. Rather than sitting down and writing these hand-specified rules to try to craft exactly what is a cat or a fish or what have you, instead, we'll go out onto the internet and collect a large dataset of many, many cats and many, many airplanes and many, many deer and different things like this. And we can actually use tools like Google Image Search, or something like that, to go out and collect a very large number of examples of these different categories. By the way, this actually takes quite a lot of effort to go out and actually collect these datasets but, luckily, there's a lot of really good, high quality datasets out there already for you to use. Then once we get this dataset, we train this machine learning classifier that is going to ingest all of the data, summarize it in some way, and then spit out a model that summarizes the knowledge of how to recognize these different object categories. Then finally, we'll use this trained model and apply it on new images, which it will then be able to recognize as cats and dogs and whatnot. So here our API has changed a little bit. Rather than a single function that just inputs an image and recognizes a cat, we have these two functions. One, called train, that's going to input images and labels and then output a model, and then, separately, another function called predict, which will input the model and then make predictions for images. And this is, kind of, the key insight that allowed all these things to start working really well over the last 10, 20 years or so. So, this class is primarily about neural networks and convolutional neural networks and deep learning and all that, but this idea of a data-driven approach is much more general than just deep learning.
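Roughly, the two APIs being contrasted look like this; the function bodies are just placeholders for illustration, not actual recognition code.

```python
def classify_image(image):
    # ??? There is no obvious algorithm to write down here.
    return "cat"   # placeholder

# The data-driven approach splits the problem into two functions instead:

def train(images, labels):
    # Machine learning: ingest the training data and summarize it into a model somehow.
    model = {"images": images, "labels": labels}   # e.g. nearest neighbor just memorizes everything
    return model

def predict(model, test_images):
    # Use the trained model to guess a label for each new image.
    return ["cat" for _ in test_images]   # placeholder
```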
And I think it's useful to, sort of, step through this process for a very simple classifier first, before we get to these big, complex ones. So, probably, the simplest classifier you can imagine is something we call nearest neighbor. The algorithm is pretty dumb, honestly. So, during the training step we won't do anything, we'll just memorize all of the training data. So this is very simple. And now, during the prediction step, we're going to take some new image and go and try to find the most similar image in the training data to that new image, and now predict the label of that most similar image. A very simple algorithm. But it, sort of, has a lot of these nice properties with respect to data-drivenness and whatnot. So, to be a little bit more concrete, you might imagine working on this dataset called CIFAR-10, which is very commonly used in machine learning, as kind of a small test case. And you'll be working with this dataset on your homework. So, the CIFAR-10 dataset gives you 10 different classes, airplanes and automobiles and birds and cats and different things like that. And it provides 50,000 training images in total, roughly evenly distributed across these 10 categories, and then 10,000 additional testing images that you're supposed to test your algorithm on. So here's an example of applying this simple nearest neighbor classifier to some of these test images on CIFAR-10. So, on this grid on the right, the left-most column gives a test image in the CIFAR-10 dataset. And now on the right, we've sorted the training images and show the most similar training images to each of these test examples. And you can see that they look kind of visually similar to the test images, although they are not always correct, right? So, maybe on the second row, this is kind of hard to see, because these images are 32 by 32 pixels, you need to really dive in there and try to make your best guess. But, this image is a dog and its nearest neighbor is also a dog, but this next one, I think, is actually a deer or a horse or something else. But, you can see that it looks quite visually similar, because there's kind of a white blob in the middle and whatnot. So, if we're applying the nearest neighbor algorithm to this image, we'll find the closest example in the training set. And now, the closest example, we know its label, because it comes from the training set. And now, we'll simply say that this testing image is also a dog. You can see from these examples that this is probably not going to work very well, but it's still kind of a nice example to work through. But then, one detail that we need to know is, given a pair of images, how can we actually compare them? Because, if we're going to take our test image and compare it to all the training images, we actually have many different choices for exactly what that comparison function should look like. So, in the example in the previous slide, we've used what's called the L1 distance, also sometimes called the Manhattan distance. So, this is a really sort of simple, easy idea for comparing images. And that's that we're going to just compare individual pixels in these images. So, supposing that our test image is maybe just a tiny four by four image of pixel values, then we'll take this upper-left hand pixel of the test image, subtract off the value in the training image, take the absolute value, and get the difference in that pixel between the two images. And then we sum all these up across all the pixels in the image. So, this is kind of a stupid way to compare images, but it does some reasonable things sometimes, and it gives us a very concrete way to measure the difference between two images. And in this case, we have this difference of 456 between these two images.
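As a concrete version of that pixel-wise comparison, here is a small NumPy sketch; the 4 by 4 single-channel pixel values are hypothetical, chosen so the result comes out to the 456 mentioned above.

```python
import numpy as np

# Hypothetical 4 x 4 pixel values for a tiny test image and a tiny training image.
test_image = np.array([[56, 32, 10, 18],
                       [90, 23, 128, 133],
                       [24, 26, 178, 200],
                       [2, 0, 255, 220]])
training_image = np.array([[10, 20, 24, 17],
                           [8, 10, 89, 100],
                           [12, 16, 178, 170],
                           [4, 32, 233, 112]])

# L1 (Manhattan) distance: sum of absolute pixel-wise differences.
d1 = np.sum(np.abs(test_image - training_image))   # 456
```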
So, here's some full Python code for implementing this nearest neighbor classifier, and you can see it's pretty short and pretty concise, because we've made use of many of these vectorized operations offered by NumPy (a rough version of it is sketched below). So, here we can see that this training function, that we talked about earlier, is, again, very simple; in the case of nearest neighbor, you just memorize the training data, there's not really much to do here. And now, at test time, we're going to take in our image and then go in and compare, using this L1 distance function, our test image to each of these training examples, and find the most similar example in the training set. And you can see that we're actually able to do this in just one or two lines of Python code by utilizing these vectorized operations in NumPy. So, this is something that you'll get practice with on the first assignment. So now, a couple questions about this simple classifier. First, if we have N examples in our training set, then how fast can we expect training and testing to be? Well, training is probably constant because we don't really need to do anything, we just need to memorize the data. And if you're just copying a pointer, that's going to be constant time no matter how big your dataset is. But now, at test time we need to do this comparison step and compare our test image to each of the N training examples in the dataset. And this is actually quite slow. So, this is actually somewhat backwards, if you think about it. Because, in practice, we want our classifiers to be slow at training time and then fast at testing time. Because, you might imagine, a classifier might go and be trained in a data center somewhere, and you can afford to spend a lot of computation at training time to make the classifier really good. But then, when you go and deploy the classifier at test time, you want it to run on your mobile phone or in a browser or some other low power device, and you really want the testing time performance of your classifier to be quite fast. So, from this perspective, this nearest neighbor algorithm is actually a little bit backwards. And we'll see that once we move to convolutional neural networks, and other types of parametric models, they'll be the reverse of this, where you'll spend a lot of compute at training time, but then they'll be quite fast at testing time. So then, the question is, what exactly does this nearest neighbor algorithm look like when you apply it in practice? So, here we've drawn what we call the decision regions of a nearest neighbor classifier. So, here our training set consists of these points in the two dimensional plane, where the color of the point represents the category, or the class label, of that point. So, here we see we have five classes, and some blue ones up in the corner here, some purple ones in the upper-right hand corner. And now for each pixel in this entire plane, we've gone and computed what is the nearest example in this training data, and then colored that point of the background according to the class label of the nearest example. So, you can see that this nearest neighbor classifier is just sort of carving up the space and coloring the space according to the nearby points.
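For reference, the nearest neighbor classifier referred to above can be sketched in a few lines like this; the class and variable names are illustrative rather than the verbatim slide code.

```python
import numpy as np

class NearestNeighbor:
    def train(self, X, y):
        """X is N x D, where each row is a flattened image; y is a vector of N labels."""
        # Nearest neighbor "training" just memorizes all the data: O(1).
        self.Xtr = X
        self.ytr = y

    def predict(self, X):
        """X is M x D; returns a predicted label for each test row."""
        num_test = X.shape[0]
        Ypred = np.zeros(num_test, dtype=self.ytr.dtype)
        for i in range(num_test):
            # Vectorized L1 distance from the i-th test image to every training image.
            distances = np.sum(np.abs(self.Xtr - X[i, :]), axis=1)
            nearest = np.argmin(distances)     # index of the closest training example
            Ypred[i] = self.ytr[nearest]       # predict its label: O(N) work per test image
        return Ypred
```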
But this classifier is maybe not so great. And by looking at this picture we can start to see some of the problems that might come out with a nearest neighbor classifier. For one, this central region actually contains mostly green points, but one little yellow point in the middle. But because we're just looking at the nearest neighbor, this causes a little yellow island to appear in the middle of this green cluster. And that's, maybe, not so great. Maybe those points actually should have been green. And then, similarly, we also see these, sort of, fingers, like the green region pushing into the blue region, again, due to the presence of one point, which may have been noisy or spurious. So, this kind of motivates a slight generalization of this algorithm called k-nearest neighbors. So rather than just looking for the single nearest neighbor, instead we'll do something a little bit fancier and find K of our nearest neighbors, according to our distance metric, and then take a vote among those neighbors, and predict the majority vote among our neighbors. You can imagine slightly more complex ways of doing this. Maybe you'd weight the votes by distance, or something like that, but the simplest thing that tends to work pretty well is just taking a majority vote. So here we've shown the exact same set of points using this K=1 nearest neighbor classifier, as well as K=3 and K=5 in the middle and on the right. And once we move to K=3, you can see that that spurious yellow point in the middle of the green cluster is no longer causing the points near that region to be classified as yellow. Now this entire green portion in the middle is all being classified as green. You can also see that these fingers of the red and blue regions are starting to get smoothed out due to this majority voting. And then, once we move to the K=5 case, then these decision boundaries between the blue and red regions have become quite smooth and quite nice. So, generally when you're using nearest neighbors classifiers, you almost always want to use some value of K which is larger than one, because this tends to smooth out your decision boundaries and lead to better results. Question? [student asking a question] Yes, so the question is, what is the deal with these white regions? The white regions are where there was no majority among the k-nearest neighbors. You could imagine maybe doing something slightly fancier and maybe taking a guess or randomly selecting among the majority winners, but for this simple example we're just coloring it white to indicate there was no majority at those points. Whenever we're thinking about computer vision, I think it's really useful to kind of flip back and forth between several different viewpoints. One is this idea of images as points in a high dimensional space, and then the other is actually looking at the concrete images themselves. Because the pixels of the image actually allow us to think of these images as high dimensional vectors. And it's sort of useful to ping pong back and forth between these two different viewpoints. So then, sort of taking this k-nearest neighbor idea and going back to the images, you can see that it's actually not very good. Here I've colored in red and green which images would actually be classified correctly or incorrectly according to their nearest neighbor. And you can see that it's really not very good. But maybe if we used a larger value of K then this would involve actually voting among maybe the top three or the top five or maybe even the whole row.
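Turning that into a k-nearest-neighbor vote only changes the prediction step; a short sketch, with ties broken arbitrarily and k left as the hyperparameter being discussed.

```python
import numpy as np

def knn_predict_one(Xtr, ytr, x, k=5):
    """Predict a label for one test point x by majority vote over its k nearest training points."""
    distances = np.sum(np.abs(Xtr - x), axis=1)   # L1 distance to every training example
    nearest = np.argsort(distances)[:k]           # indices of the k closest examples
    votes = ytr[nearest]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]              # the majority label among the k neighbors
```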
And you could imagine that that would end up being a lot more robust to some of this noise that we see when retrieving neighbors in this way. So another choice we have when we're working with the k-nearest neighbor algorithm is determining exactly how we should be comparing our different points. For the examples we've shown so far, we've talked about this L1 distance, which takes the sum of the absolute differences between the pixels. But another common choice is the L2, or Euclidean, distance, where you take the square root of the sum of the squared differences and take this as your distance. Choosing different distance metrics actually is a pretty interesting topic, because different distance metrics make different assumptions about the underlying geometry or topology that you'd expect in the space. So, under the L1 distance, a circle is actually this square shape around the origin, where each of the points on the square is equidistant from the origin according to L1, whereas with the L2, or Euclidean, distance, this circle is a familiar circle; it looks like what you'd expect. So one interesting thing to point out between these two metrics in particular is that the L1 distance depends on your choice of coordinate system. So if you were to rotate the coordinate frame, that would actually change the L1 distance between the points, whereas changing the coordinate frame doesn't matter for the L2 distance; it's the same thing no matter what your coordinate frame is. Maybe if your input features, if the individual entries in your vector, have some important meaning for your task, then maybe somehow L1 might be a more natural fit. But if it's just a generic vector in some space and you don't know what the different elements actually mean, then maybe L2 is slightly more natural. And another point here is that by using different distance metrics we can actually generalize the k-nearest neighbor classifier to many, many different types of data, not just vectors, not just images. So, for example, imagine you wanted to classify pieces of text; then the only thing you need to do to use k-nearest neighbors is to specify some distance function that can measure distances between maybe two paragraphs or two sentences or something like that. So, simply by specifying different distance metrics we can actually apply this algorithm very generally to basically any type of data. Even though it's a kind of simple algorithm, in general, it's a very good thing to try first when you're looking at a new problem. So then, it's also kind of interesting to think about what is actually happening geometrically if we choose different distance metrics. So here we see the same set of points on the left using the L1, or Manhattan, distance, and then, on the right, using the familiar L2, or Euclidean, distance. And you can see that the shapes of these decision boundaries actually change quite a bit between the two metrics. So when you're looking at L1, these decision boundaries tend to follow the coordinate axes. And this is again because the L1 depends on our choice of coordinate system, where the L2 sort of doesn't really care about the coordinate axes, it just puts the boundaries where they should fall naturally. My confession is that each of these examples that I've shown you is actually from this interactive web demo that I built, where you can go and play with this k-nearest neighbor classifier on your own.
And this is really hard to work on a projector screen. So maybe we'll do that on your own time. So, let's just go back to here. Man, this is kind of embarrassing. Okay, that was way more trouble than it was worth. So, let's skip this, but I encourage you to go play with this in your browser. It's actually pretty fun and kind of nice to build intuition about how the decision boundary changes as you change the K and change your distance metric and all those sorts of things. Okay, so then the question is, once you're actually trying to use this algorithm in practice, there are several choices you need to make. We talked about choosing different values of K. We talked about choosing different distance metrics. And the question becomes how do you actually make these choices for your problem and for your data? So, these choices, of things like K and the distance metric, we call hyperparameters, because they are not necessarily learned from the training data; instead these are choices about your algorithm that you make ahead of time, and there's no way to learn them directly from the data. So, the question is how do you set these things in practice? And they turn out to be very problem-dependent. And the simple thing that most people do is simply try different values of hyperparameters for your data and for your problem, and figure out which one works best. There's a question? [student asking a question] So, the question is, when might the L1 distance be preferable to the L2 distance? I think it's mainly problem-dependent; it's sort of difficult to say in which cases one might be better than the other. But because L1 has this sort of coordinate dependency, it actually depends on the coordinate system of your data. So if you know that you have a vector where the individual elements have meaning, like maybe you're classifying employees for some reason and the different elements of that vector correspond to different features or aspects of an employee, like their salary or the number of years they've been working at the company or something like that, then when your individual elements actually have some meaning is where I think maybe using L1 might make a little bit more sense. But in general, again, this is a hyperparameter and it really depends on your problem and your data, so the best answer is just to try them both and see what works better. Even with this idea of trying out different values of hyperparameters and seeing what works best, there are many different choices here. What exactly does it mean to try hyperparameters and see what works best? Well, the first idea you might think of is simply choosing the hyperparameters that give you the best accuracy or best performance on your training data. This is actually a really terrible idea. You should never do this. In the concrete case of the nearest neighbor classifier, for example, if we set K=1, we will always classify the training data perfectly. So if we use this strategy we'll always pick K=1, but, as we saw from the examples earlier, in practice it seems that setting K equal to larger values might cause us to misclassify some of the training data, but, in fact, lead to better performance on points that were not in the training data. And ultimately in machine learning we don't care about fitting the training data, we really care about how our classifier, or how our method, will perform on unseen data after training. So, this is a terrible idea, don't do this.
So, another idea that you might think of is maybe we'll take our full dataset and we'll split it into some training data and some test data. And now I'll try training my algorithm with different choices of hyperparameters on the training data, and then I'll go and apply that trained classifier on the test data, and now I will pick the set of hyperparameters that cause me to perform best on the test data. This seems like maybe a more reasonable strategy, but, in fact, this is also a terrible idea and you should never do this. Because, again, the point of machine learning systems is that we want to know how our algorithm will perform. So, the point of the test set is to give us some estimate of how our method will do on unseen data that's coming from the wild. And if we use this strategy of training many different algorithms with different hyperparameters, and then selecting the one which does the best on the test data, then it's possible that we may have just picked the right set of hyperparameters that caused our algorithm to work quite well on this testing set, but now our performance on this test set will no longer be representative of our performance on new, unseen data. So, again, you should not do this, this is a bad idea, you'll get in trouble if you do this. What is much more common is to actually split your data into three different sets. You'll partition most of your data into a training set, and then you'll create a validation set and a test set. And now what we typically do is go and train our algorithm with many different choices of hyperparameters on the training set, evaluate on the validation set, and now pick the set of hyperparameters which performs best on the validation set. And now, after you've done all your development, you've done all your debugging, after you've done everything, then you'd take that best performing classifier on the validation set and run it once on the test set. And now that's the number that goes into your paper, that's the number that goes into your report, that's the number that actually is telling you how your algorithm is doing on unseen data. And this is actually really, really important, that you keep a very strict separation between the validation data and the test data. So, for example, when we're working on research papers, we typically only touch the test set at the very last minute. So, when I'm writing papers, I tend to only touch the test set for my problem in maybe the week before the deadline or so, to really ensure that we're not being dishonest here and we're not reporting a number which is unfair. So, this is actually super important and you want to make sure to keep your test data quite under control.
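A minimal sketch of that train/validation/test workflow, using a toy synthetic dataset and an unoptimized k-NN accuracy helper; the split sizes, dimensions, and candidate values of K are all arbitrary illustrative choices.

```python
import numpy as np

def knn_accuracy(Xtr, ytr, Xval, yval, k):
    """Accuracy of a k-NN classifier with L1 distance (toy, unoptimized)."""
    correct = 0
    for x, y_true in zip(Xval, yval):
        dists = np.sum(np.abs(Xtr - x), axis=1)
        votes = ytr[np.argsort(dists)[:k]]
        labels, counts = np.unique(votes, return_counts=True)
        correct += (labels[np.argmax(counts)] == y_true)
    return correct / len(yval)

# Toy stand-in data; in practice these would be your real train / validation / test partitions.
rng = np.random.default_rng(0)
X, y = rng.standard_normal((600, 32)), rng.integers(0, 3, size=600)
Xtr, ytr = X[:400], y[:400]          # training split
Xval, yval = X[400:500], y[400:500]  # validation split: used to pick hyperparameters
Xte, yte = X[500:], y[500:]          # test split: touched exactly once at the very end

best_k = max([1, 3, 5, 7, 11], key=lambda k: knn_accuracy(Xtr, ytr, Xval, yval, k))
final_test_acc = knn_accuracy(Xtr, ytr, Xte, yte, best_k)   # the number you'd report
```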
So another strategy for setting hyperparameters is called cross validation. And this is used a little bit more commonly for small datasets, not used so much in deep learning. So here the idea is we're going to take our dataset, as usual, hold out some test set to use at the very end, and now, for the rest of the data, rather than splitting it into a single training and validation partition, instead, we can split our training data into many different folds. And now, in this way, we cycle through choosing which fold is going to be the validation set. So now, in this example, we're using five fold cross validation, so you would train your algorithm with one set of hyperparameters on the first four folds, evaluate the performance on fold five, and then go and retrain your algorithm on folds one, two, three, and five, evaluate on fold four, and cycle through all the different folds. And, when you do it this way, you get much higher confidence about which hyperparameters are going to perform more robustly. So this is kind of the gold standard to use, but, in practice in deep learning, when we're training large models and training is very computationally expensive, this doesn't get used too much. Question? [student asking a question] Yeah, so the question is, a little bit more concretely, what's the difference between the training and the validation set? So, if you think about the k-nearest neighbor classifier, then the training set is this set of images with labels where we memorize the labels. And now, to classify an image, we're going to take the image and compare it to each element in the training data, and then transfer the label from the nearest training point. So now our algorithm will memorize everything in the training set, and now we'll take each element of the validation set and compare it to each element in the training data, and then use this to determine what is the accuracy of our classifier when it's applied on the validation set. So this is the distinction between training and validation, where your algorithm is able to see the labels of the training set, but for the validation set, your algorithm doesn't have direct access to the labels. We only use the labels of the validation set to check how well our algorithm is doing. A question? [student asking a question] The question is whether it's possible that the test set might not be representative of data out there in the wild? This definitely can be a problem in practice; the underlying statistical assumption here is that your data are all independently and identically distributed, so that all of your data points should be drawn from the same underlying probability distribution. Of course, in practice, this might not always be the case, and you definitely can run into cases where the test set might not be super representative of what you see in the wild. So this is kind of a problem that dataset creators and dataset curators need to think about. But when I'm creating datasets, for example, one thing I do is I'll go and collect a whole bunch of data all at once, using the exact same methodology for collecting the data, and then afterwards you go and partition it randomly between train and test. One thing that can screw you up here is maybe if you're collecting data over time and you make the earlier data, that you collect first, be the training data, and the later data that you collect be the test data, then you actually might run into this shift that could cause problems. But as long as this partition is random among your entire set of data points, then that's how we try to alleviate this problem in practice. So then, once you've gone through this cross validation procedure, you end up with graphs that look something like this. So here, on the X axis, we are showing the value of K for a k-nearest neighbor classifier on some problem, and now on the Y axis, we are showing what is the accuracy of our classifier on some dataset for different values of K.
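A rough sketch of the five-fold loop itself, assuming an accuracy helper like the one sketched a moment ago; np.array_split just chops the training data into roughly equal folds.

```python
import numpy as np

def cross_validate(X, y, k, accuracy, num_folds=5):
    """Return the validation accuracy of k-NN on each of num_folds folds."""
    X_folds = np.array_split(X, num_folds)
    y_folds = np.array_split(y, num_folds)
    accs = []
    for i in range(num_folds):
        # Fold i is the validation fold; the remaining folds are concatenated into the training set.
        Xval, yval = X_folds[i], y_folds[i]
        Xtr = np.concatenate(X_folds[:i] + X_folds[i + 1:])
        ytr = np.concatenate(y_folds[:i] + y_folds[i + 1:])
        accs.append(accuracy(Xtr, ytr, Xval, yval, k))
    return accs   # several numbers per value of k; their spread is the scatter seen in the plot

# e.g. accs = cross_validate(Xtr, ytr, k=7, accuracy=knn_accuracy)  # using the helper sketched earlier
```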
And you can see that, in this case, we've done five fold cross validation over the data, so, for each value of K we have five different examples of how well this algorithm is doing. And, actually, going back to the question about having some test sets that are better or worse for your algorithm, using K fold cross validation is maybe one way to help quantify that a little bit, in that we can see the variance of how this algorithm performs on the different validation folds. And that gives you some sense of, not just what is the best, but, also, what is the distribution of that performance. So, whenever you're training machine learning models you end up making plots like this, where they show you what is your accuracy, or your performance, as a function of your hyperparameters, and then you want to go and pick the model, or the set of hyperparameters, at the end of the day, that performs the best on the validation set. So, here we see that maybe about K=7 probably works best for this problem. So, k-nearest neighbor classifiers on images are actually almost never used in practice, because of all of these problems that we've talked about. So, one problem is that it's very slow at test time, which is the reverse of what we want, which we talked about earlier. Another problem is that these things like Euclidean distance, or L1 distance, are really not a very good way to measure distances between images. These, sort of, vectorial distance functions do not correspond very well to perceptual similarity between images, to how you perceive differences between images. So, in this example, there's this image on the left of a girl, and then three different distorted images on the right, where we've blocked out her mouth, shifted the image down by a couple of pixels, or tinted the entire image blue. And, actually, if you compute the Euclidean distance between the original and the boxed, the original and the shifted, and the original and the tinted, they all have the same L2 distance. Which is, maybe, not so good, because it sort of gives you the sense that the L2 distance is really not doing a very good job at capturing these perceptual distances between images. Another, sort of, problem with the k-nearest neighbor classifier has to do with something we call the curse of dimensionality. So, if you recall back this viewpoint we had of the k-nearest neighbor classifier, it's sort of dropping paint around each of the training data points and using that to sort of partition the space. So that means that if we expect the k-nearest neighbor classifier to work well, we kind of need our training examples to cover the space quite densely. Otherwise our nearest neighbors could actually be quite far away and might not actually be very similar to our testing points. And the problem is that actually densely covering the space means that we need a number of training examples which is exponential in the dimension of the problem. So this is very bad; exponential growth is always bad, and, basically, you're never going to get enough images to densely cover this space of pixels in this high dimensional space. So that's maybe another thing to keep in mind when you're thinking about using k-nearest neighbor. So, kind of the summary is that we're using k-nearest neighbor to introduce this idea of image classification. We have a training set of images and labels and then we use that to predict these labels on the test set. Question?
[student asking a question] Oh, sorry, the question is, what was going on with this picture? What are the green and the blue dots? So here, we have some training samples which are represented by points, and the color of the dot represents the category of that training sample. So, if we're in one dimension, then you maybe only need four training samples to densely cover the space, but if we move to two dimensions, then we now need four times four, or 16, training examples to densely cover the space. And if we move to three, four, five, many more dimensions, the number of training examples that we need to densely cover the space grows exponentially with the dimension. So, this is kind of giving you the sense that maybe in two dimensions we might have this kind of funny curved shape, or you might have sort of arbitrary manifolds of labels in different dimensional spaces. Because the k-nearest neighbor algorithm doesn't really make any assumptions about these underlying manifolds, the only way it can perform properly is if it has quite a dense sample of training points to work with. So, this is kind of the overview of k-nearest neighbors, and you'll get a chance to actually implement this and try it out on images in the first assignment. So, if there are any last minute questions about k-nearest neighbors, otherwise I'm going to move on to the next topic. Question? [student is asking a question] Sorry, say that again. [student is asking a question] Yeah, so the question is, why do these images have the same L2 distance? And the answer is that I carefully constructed them to have the same L2 distance. [laughing] But it's just giving you the sense that the L2 distance is not a very good measure of similarity between images. And these images are actually all different from each other in quite disparate ways. If you're using k-nearest neighbors, then the only thing you have to measure distance between images is this single distance metric. And this kind of gives you an example where that distance metric is actually not capturing the full description of distance or difference between images. So, in this case, I just sort of carefully constructed these translations and these offsets to match exactly. Question? [student asking a question] So, the question is, maybe this is actually good, because all of these things actually have the same distance to the original image. That's maybe true for this example, but I think you could also construct examples where maybe we have two original images, and then by putting the boxes in the right places or tinting them, we could cause it to be nearer to pretty much anything that you want, right? Because in this example, we can kind of do arbitrary shifting and tinting to change these distances nearly arbitrarily without changing the perceptual nature of these images. So, I think that this can actually screw you up if you have many different original images. Question? [student is asking a question] The question is, whether or not it's common in real-world cases to go back and retrain on the entire dataset once you've found those best hyperparameters? So, people do sometimes do this in practice, but it's somewhat a matter of taste. If you're really rushing for that deadline and you've really got to get this model out the door, then, if it takes a long time to retrain the model on the whole dataset, then maybe you won't do it.
But if you have a little bit more time to spare and a little bit more compute to spare, and you want to squeeze out that maybe that extra 1% of performance, then that is a trick you can use. So we kind of saw that the k-nearest neighbor has a lot of the nice properties of machine learning algorithms, but in practice it's not so great, and really not used very much in images. So the next thing I'd like to talk about is linear classification. And linear classification is, again, quite a simple learning algorithm, but this will become super important and help us build up to whole neural networks and whole convolutional networks. So, one analogy people often talk about when working with neural networks is we think of them as being kind of like Lego blocks. That you can have different kinds of components of neural networks and you can stick these components together to build these large different towers of convolutional networks. One of the most basic building blocks that we'll see in different types of deep learning applications is this linear classifier. So, I think it's actually really important to have a good understanding of what's happening with linear classification. Because these will end up generalizing quite nicely to whole neural networks. So another example of kind of this modular nature of neural networks comes from some research in our own lab on image captioning, just as a little bit of a preview. So here the setup is that we want to input an image and then output a descriptive sentence describing the image. And the way this kind of works is that we have one convolutional neural network that's looking at the image, and a recurrent neural network that knows about language. And we can kind of just stick these two pieces together like Lego blocks and train the whole thing together and end up with a pretty cool system that can do some non-trivial things. And we'll work through the details of this model as we go forward in the class, but this just gives you the sense that, these deep neural networks are kind of like Legos and this linear classifier is kind of like the most basic building blocks of these giant networks. But that's a little bit too exciting for lecture two, so we have to go back to CIFAR-10 for the moment. [laughing] So, recall that CIFAR-10 has these 50,000 training examples, each image is 32 by 32 pixels and three color channels. In linear classification, we're going to take a bit of a different approach from k-nearest neighbor. So, the linear classifier is one of the simplest examples of what we call a parametric model. So now, our parametric model actually has two different components. It's going to take in this image, maybe, of a cat on the left, and this, that we usually write as X for our input data, and also a set of parameters, or weights, which is usually called W, also sometimes theta, depending on the literature. And now we're going to write down some function which takes in both the data, X, and the parameters, W, and this'll spit out now 10 numbers describing what are the scores corresponding to each of those 10 categories in CIFAR-10. With the interpretation that, like the larger score for cat, indicates a larger probability of that input X being cat. And now, a question? [student asking a question] Sorry, can you repeat that? [student asking a question] Oh, so the question is what is the three? The three, in this example, corresponds to the three color channels, red, green, and blue. 
Because we typically work on color images, that's nice information that you don't want to throw away. So, in the k-nearest neighbor setup there were no parameters; instead, we just kind of keep around the whole training data, the whole training set, and use that at test time. But now, in a parametric approach, we're going to summarize our knowledge of the training data and stick all that knowledge into these parameters, W. And now, at test time, we no longer need the actual training data, we can throw it away. We only need these parameters, W, at test time. So this allows our models to now be more efficient and actually run on maybe small devices like phones. So, kind of, the whole story in deep learning is coming up with the right structure for this function, F. You can imagine writing down different functional forms for how to combine weights and data in different complex ways, and these could correspond to different network architectures. But the simplest possible example of combining these two things is just, maybe, to multiply them. And this is a linear classifier. So here our F of X, W is just equal to W times X. Probably the simplest equation you can imagine. So here, if you kind of unpack the dimensions of these things, we recall that our image was maybe 32 by 32 by 3 values. So then, we're going to take those values and then stretch them out into a long column vector that has 3,072 entries, so it's 3,072 by one. And now we want to end up with 10 class scores. We want to end up with 10 numbers for this image giving us the scores for each of the 10 categories. Which means that now our matrix, W, needs to be ten by 3,072. So that once we multiply these two things out then we'll end up with a single column vector, 10 by one, giving us our 10 class scores. Also, as you'll typically see, we'll often add a bias term which will be a constant vector of 10 elements that does not interact with the training data, and instead just gives us some sort of data-independent preferences for some classes over another. So you might imagine that if your dataset were unbalanced and had many more cats than dogs, for example, then the bias elements corresponding to cat would be higher than the other ones. So if you kind of think about pictorially what this function is doing, in this figure we have an example on the left of a simple image with just a two by two image, so it has four pixels total. So the way that the linear classifier works is that we take this two by two image, we stretch it out into a column vector with four elements, and now, in this example, we are just restricting to three classes, cat, dog, and ship, because you can't fit 10 on a slide, and now our weight matrix is going to be three by four, so we have three classes and four pixels. And now, again, we have a three element bias vector that gives us data-independent bias terms for each category. Now we see that the cat score is going to be the inner product between the pixels of our image and this row in the weight matrix, added together with this bias term. So, when you look at it this way you can kind of understand linear classification as almost a template matching approach. Where each of the rows in this matrix corresponds to some template of the image. And now the inner product or dot product between the row of the matrix and the column giving the pixels of the image, computing this dot product, kind of gives us a similarity between this template for the class and the pixels of our image.
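To make those shapes concrete, here is a minimal numpy sketch of the score computation (random numbers stand in for a real image and trained weights, and the variable names are just illustrative, not the assignment code):

```python
import numpy as np

x = np.random.rand(32 * 32 * 3)        # one CIFAR-10 image stretched into a 3,072-entry vector
W = np.random.randn(10, 32 * 32 * 3)   # one row of weights per class: 10 by 3,072
b = np.random.randn(10)                # bias: a data-independent preference for each class

scores = W.dot(x) + b                  # 10 class scores; each is an inner product of a row of W with x
template = W[0].reshape(32, 32, 3)     # a row of W unraveled back to image shape: the "template" for class 0
```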
And then the bias just, again, gives you this data-independent scaling offset to each of the classes. If we think about linear classification from this viewpoint of template matching we can actually take the rows of that weight matrix and unravel them back into images and actually visualize those templates as images. And this gives us some sense of what a linear classifier might actually be doing to try to understand our data. So, in this example, we've gone ahead and trained a linear classifier on our images. And now on the bottom we're visualizing what are those rows in that learned weight matrix corresponding to each of the 10 categories in CIFAR-10. And in this way we kind of get a sense for what's going on in these images. So, for example, on the bottom left, we see that the template for the plane class kind of consists of this like blue blob, this kind of blobby thing in the middle and maybe blue in the background, which gives you the sense that this linear classifier for plane is maybe looking for blue stuff and blobby stuff, and those features are going to cause the classifier to like planes more. Or if we look at this car example, we kind of see that there's a red blobby thing through the middle and a blue blobby thing at the top that maybe is kind of a blurry windshield. But this is a little bit weird, this doesn't really look like a car. No individual car actually looks like this. So the problem is that the linear classifier is only learning one template for each class. So if there's sort of variations in how that class might appear, it's trying to average out all those different variations, all those different appearances, and use just one single template to recognize each of those categories. We can also see this pretty explicitly in the horse classifier. So in the horse classifier we see green stuff on the bottom because horses are usually on grass. And then, if you look carefully, the horse actually seems to have maybe two heads, one head on each side. And I've never seen a horse with two heads. But the linear classifier is just doing the best that it can, because it's only allowed to learn one template per category. And as we move forward into neural networks and more complex models, we'll be able to achieve much better accuracy because they no longer have this restriction of just learning a single template per category. Another viewpoint of the linear classifier is to go back to this idea of images as points in a high-dimensional space. And you can imagine that each of our images is something like a point in this high-dimensional space. And now the linear classifier is putting in these linear decision boundaries to try to draw linear separation between one category and the rest of the categories. So maybe up on the upper-left hand side we see these training examples of airplanes, and throughout the process of training, the linear classifier will go and try to draw this blue line to separate out, with a single line, the airplane class from all the rest of the classes. And it's actually kind of fun if you watch during the training process: these lines will start out randomly and then go and snap into place to try to separate the data properly. But when you think about linear classification in this way, from this high-dimensional point of view, you can start to see again what are some of the problems that might come up with linear classification. And it's not too hard to construct examples of datasets where a linear classifier will totally fail.
So, one example, on the left here, is that, suppose we have a dataset of two categories, and these are all maybe somewhat artificial, but maybe our dataset has two categories, blue and red. And the blue category is anything where the number of pixels in the image that are greater than zero is odd. And anything where the number of pixels greater than zero is even, we want to classify as the red category. So if you actually go and draw what these different decision regions look like in the plane, you can see that our blue class with an odd number of pixels is going to be these two quadrants in the plane, and even will be the opposite two quadrants. So now, there's no way that we can draw a single straight line to separate the blue from the red. So this would be an example where a linear classifier would really struggle. And this is maybe not such an artificial thing after all. Instead of counting pixels, maybe we're actually trying to count whether the number of animals or people in an image is odd or even. So this kind of a parity problem of separating odds from evens is something that linear classification really struggles with traditionally. Other situations where a linear classifier really struggles are multimodal situations. So here on the right, maybe our blue category has these three different islands of where the blue category lives, and then everything else is some other category. So something like the horses we saw in the previous example is a case where this actually might be happening in practice. Where there's maybe one island in the pixel space of horses looking to the left, and another island of horses looking to the right. And now there's no good way to draw a single linear boundary between these two isolated islands of data. So anytime you have multimodal data, like one class that can appear in different regions of space, that's another place where linear classifiers might struggle. So there's kind of a lot of problems with linear classifiers, but it is a super simple algorithm, super nice and easy to interpret and easy to understand. So you'll actually be implementing these things on your first homework assignment. At this point, we kind of talked about what is the functional form corresponding to a linear classifier. And we've seen that this functional form of matrix vector multiply corresponds to this idea of template matching and learning a single template for each category in your data. And then once we have this trained matrix you can use it to actually go and get your scores for any new example. But what we have not told you is how do you actually go about choosing the right W for your dataset. We've just talked about what is the functional form and what is going on with this thing. So that's something we'll really focus on next time. And next lecture we'll talk about what are the strategies and algorithms for choosing the right W. And this will lead us to questions of loss functions and optimization and eventually ConvNets. So, that's a bit of the preview for next week. And that's all we have for today.
Lecture_Collection_Convolutional_Neural_Networks_for_Visual_Recognition_Spring_2017
Lecture_3_Loss_Functions_and_Optimization.txt
- Okay so welcome to CS 231N Lecture three. Today we're going to talk about loss functions and optimization but as usual, before we get to the main content of the lecture, there's a couple administrative things to talk about. So the first thing is that assignment one has been released. You can find the link up on the website. And since we were a little bit late in getting this assignment out to you guys, we've decided to change the due date to Thursday, April 20th at 11:59 p.m., this will give you a full two weeks from the assignment release date to go and actually finish and work on it, so we'll update the syllabus for this new due date a little bit later today. And as a reminder, when you complete the assignment, you should go turn in the final zip file on Canvas so we can grade it and get your grades back as quickly as possible. So the next thing is always check out Piazza for interesting administrative stuff. So this week I wanted to highlight that we have several example project ideas as a pinned post on Piazza. So we went out and solicited example project ideas from various people in the Stanford community or affiliated with Stanford, and they came up with some interesting suggestions for projects that they might want students in the class to work on. So check out this pinned post on Piazza and if you want to work on any of these projects, then feel free to contact the project mentors directly about these things. Additionally, we posted office hours on the course website, this is a Google calendar, so this is something that people have been asking about and now it's up there. The final administrative note is about Google Cloud, as a reminder, because we're supported by Google Cloud in this class, we're able to give each of you an additional $100 credit for Google Cloud to work on your assignments and projects, and the exact details of how to redeem that credit will go out later today, most likely on Piazza. So if there's, I guess if there's no questions about administrative stuff then we'll move on to course content. Okay cool. So recall from last time in lecture two, we were really talking about the challenges of recognition and trying to hone in on this idea of a data-driven approach. We talked about this idea of image classification, talked about why it's hard, there's this semantic gap between the giant grid of numbers that the computer sees and the actual image that you see. We talked about various challenges regarding this around illumination, deformation, et cetera, and why this is actually a really, really hard problem even though it's super easy for people to do with their human eyes and human visual system. Then also recall last time we talked about the k-nearest neighbor classifier as kind of a simple introduction to this whole data-driven mindset. We talked about the CIFAR-10 data set where you can see an example of these images on the upper left here, where CIFAR-10 gives you these 10 different categories, airplane, automobile, whatnot, and we talked about how the k-nearest neighbor classifier can be used to learn decision boundaries to separate these data points into classes based on the training data. This also led us to a discussion of the idea of cross validation and setting hyper parameters by dividing your data into train, validation and test sets. Then also recall last time we talked about linear classification as the first sort of building block as we move toward neural networks.
Recall that the linear classifier is an example of a parametric classifier where all of our knowledge about the training data gets summarized into this parameter matrix W that is set during the process of training. And this linear classifier, recall, is super simple, where we're going to take the image and stretch it out into a long vector. So here the image is x and then we take that image which might be 32 by 32 by 3 pixels, stretch it out into a long column vector of 32 times 32 times 3 entries, where the 32 and 32 are the height and width, and the 3 gives you the three color channels, red, green, blue. Then there exists some parameter matrix, W, which will take this long column vector representing the image pixels, and convert this and give you 10 numbers giving scores for each of the 10 classes in the case of CIFAR-10. Where we kind of had this interpretation where larger values of those scores, so a larger value for the cat class, means the classifier thinks that the cat is more likely for that image, and lower values for maybe the dog or car class indicate lower probabilities of those classes being present in the image. Also, I think this point was a little bit unclear last time: linear classification has this interpretation as learning templates per class, where if you look at the diagram on the lower left, you can see that for every pixel in the image, and for every one of our 10 classes, there exists some entry in this matrix W, telling us how much does that pixel influence that class. So that means that each of these rows in the matrix W ends up corresponding to a template for the class. And if we take those rows and unravel them, so each of those rows again corresponds to a weighting between the pixel values of the image and that class, so if we take that row and unravel it back into an image, then we can visualize the learned template for each of these classes. We also had this interpretation of linear classification as learning linear decision boundaries between pixels in some high dimensional space where the dimensions of the space correspond to the pixel intensity values of the image. So this is kind of where we left off last time. And so where we kind of stopped, where we ended up last time is we got this idea of a linear classifier, and we didn't talk about how to actually choose the W. How to actually use the training data to determine which value of W should be best. So kind of where we stopped off at is that for some setting of W, we can use this W to come up with our 10 class scores for any image. So and some of these class scores might be better or worse. So here in this simple example, we've shown maybe just a training data set of three images along with the 10 class scores predicted for some value of W for those images. And you can see that some of these scores are better or worse than others. So for example in the image on the left, if you look up, it's actually a cat because you're a human and you can tell these things, but if we look at the assigned probabilities, cat, well not probabilities but scores, then the classifier maybe for this setting of W gave the cat class a score of 2.9 for this image, whereas the frog class gave 3.78.
So maybe the classifier is not doing so well on this image, that's bad, we wanted the true class to be actually the highest class score, whereas for some of these other examples, like the car for example, you see that the automobile class has a score of six which is much higher than any of the others, so that's good. And the frog, the predicted scores are maybe negative four, which is much lower than all the other ones, so that's actually bad. So this is kind of a hand wavy approach, just kind of looking at the scores and eyeballing which ones are good and which ones are bad. But to actually write algorithms about these things and to actually determine automatically which W will be best, we need some way to quantify the badness of any particular W. And this function that takes in a W, looks at the scores and then tells us quantitatively how bad that W is, is something that we'll call a loss function. And in this lecture we'll see a couple examples of different loss functions that you can use for this image classification problem. So then once we've got this idea of a loss function, this allows us to quantify for any given value of W, how good or bad is it? But then we actually need to find and come up with an efficient procedure for searching through the space of all possible Ws and actually come up with what is the correct value of W that is the least bad, and this process will be an optimization procedure and we'll talk more about that in this lecture. So I'm going to shrink this example a little bit because 10 classes is a little bit unwieldy. So we'll kind of work with this tiny toy data set of three examples and three classes going forward in this lecture. So again, in this example, the cat is maybe not so correctly classified, the car is correctly classified, and the frog, this setting of W got this frog image totally wrong, because the frog score is much lower than the others. So to formalize this a little bit, usually when we talk about a loss function, we imagine that we have some training data set of xs and ys, usually N examples of these, where the xs are the inputs to the algorithm, in the image classification case the xs would be the actual pixel values of your images, and the ys will be the things you want your algorithm to predict, we usually call these the labels or the targets. So in the case of image classification, remember we're trying to categorize each image for CIFAR-10 to one of 10 categories, so the label y here will be an integer between one and 10 or maybe between zero and nine depending on what programming language you're using, but it'll be an integer telling you what is the correct category for each one of those images x. And now we'll use L_i to denote the loss for a single example. So we have this prediction function f, which takes in our example x and our weight matrix W and makes some prediction for y, in the case of image classification these will be our 10 numbers. Then we'll define some loss function L_i which will take in the predicted scores coming out of the function f together with the true target or label Y and give us some quantitative value for how bad those predictions are for that training example. And now the final loss L will be the average of these losses over the entire data set, over each of the N examples in our data set. So this is actually a very general formulation, and actually extends even beyond image classification.
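Written out as a rough sketch in code (not anyone's actual API, just to pin down the structure being described; f and per_example_loss are placeholders here):

```python
import numpy as np

def total_loss(W, X, Y, f, per_example_loss):
    # L(W) = (1/N) * sum_i  L_i( f(x_i, W), y_i )
    # f turns an example and the weights into a vector of class scores;
    # per_example_loss says how bad those scores are given the true label y_i.
    return np.mean([per_example_loss(f(x, W), y) for x, y in zip(X, Y)])
```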
Kind of as we move forward and see other examples of tasks in deep learning, the kind of generic setup is that for any task you have some xs and ys and you want to write down some loss function that quantifies exactly how happy you are with your particular parameter settings W and then you'll eventually search over the space of W to find the W that minimizes the loss on your training data. So as a first example of a concrete loss function that is a nice thing to work with in image classification, we'll talk about the multi-class SVM loss. You may have seen the binary SVM, or support vector machine, in CS 229, and the multiclass SVM is a generalization of that to handle multiple classes. In the binary SVM case as you may have seen in 229, you only had two classes, each example x was going to be classified as either a positive or negative example, but now we have 10 categories, so we need to generalize this notion to handle multiple classes. So this loss function has kind of a funny functional form, so we'll walk through it in quite a bit of detail over the next couple of slides. But what this is saying is that the loss L_i for any individual example, the way we'll compute it is we're going to perform a sum over all of the categories, Y, except for the true category, Y_i, so we're going to sum over all the incorrect categories, and then we're going to compare the score of the correct category and the score of the incorrect category, and now if the score for the correct category is greater than the score of the incorrect category by at least some safety margin that we set to one, that is, if the score for the true category is sufficiently larger than that false category, then we'll get a loss of zero for that pair. And we'll sum this up over all of the incorrect categories for our image and this will give us our final loss for this one example in the data set. And again we'll take the average of this loss over the whole training data set. So this kind of like if-then statement, like if the true class score is much larger than the others, this kind of if-then formulation we often compactify into this single max of zero, S_j minus S_Yi plus one, thing, but I always find that notation a little bit confusing, and it always helps me to write it out in this sort of case based notation to figure out exactly what the two cases are and what's going on. And by the way, this style of loss function where we take max of zero and some other quantity is often referred to as some type of a hinge loss, and this name comes from the shape of the graph when you go and plot it, so here the x axis corresponds to the S_Yi, that is the score of the true class for some training example, and now the y axis is the loss, and you can see that as the score for the true category for this example increases, then the loss will go down linearly until we get to above this safety margin, after which the loss will be zero because we've already correctly classified this example. So let's, oh, question? - [Student] Sorry, in terms of notation what is S underscore Yi? Is that your right score? - Yeah, so the question is in terms of notation, what is S and what is SYI in particular, so the Ss are the predicted scores for the classes that are coming out of the classifier. So if one is the cat class and two is the dog class then S1 and S2 would be the cat and dog scores respectively.
And remember we said that Yi was the category of the ground truth label for the example which is some integer. So then S sub Y sub i, sorry for the double subscript, that corresponds to the score of the true class for the i-th example in the training set. Question? - [Student] So what exactly is this computing? - Yeah the question is what exactly is this computing here? It's a little bit funny, I think it will become more clear when we walk through an explicit example, but in some sense what this loss is saying is that we are happy if the true score is much higher than all the other scores. It needs to be higher than all the other scores by some safety margin, and if the true score is not high enough, greater than any of the other scores, then we will incur some loss and that would be bad. So this might make a little bit more sense if we walk through an explicit example for this tiny three example data set. So here remember I've sort of removed the case space notation and just switching back to the zero one notation, and now if we look at, if we think about computing this multi-class SVM loss for just this first training example on the left, then remember we're going to loop over all of the incorrect classes, so for this example, cat is the correct class, so we're going to loop over the car and frog classes, and now for car, we're going to compare the, we're going to look at the car score, 5.1, minus the cat score, 3.2 plus one, when we're comparing cat and car we expect to incur some loss here because the car score is greater than the cat score which is bad. So for this one class, for this one example, we'll incur a loss of 2.9, and then when we go and compare the cat score and the frog score we see that cat is 3.2, frog is minus 1.7, so cat is more than one greater than frog, which means that between these two classes we incur zero loss. So then the multiclass SVM loss for this training example will be the sum of the losses across each of these pairs of classes, which will be 2.9 plus zero which is 2.9. Which is sort of saying that 2.9 is a quantitative measure of how much our classifier screwed up on this one training example. And then if we repeat this procedure for this next car image, then again the true class is car, so we're going to iterate over all the other categories when we compare the car and the cat score, we see that car is more than one greater than cat so we get no loss here. When we compare car and frog, we again see that the car score is more than one greater than frog, so we get again no loss here, and our total loss for this training example is zero. And now I think you hopefully get the picture by now, but, if you go look at frog, now frog, we again compare frog and cat, incur quite a lot of loss because the frog score is very low, compare frog and car, incur a lot of loss because the score is very low, and then our loss for this example is 12.9. And then our final loss for the entire data set is the average of these losses across the different examples, so when you sum those out it comes to about 5.3. So then it's sort of, this is our quantitative measure that our classifier is 5.3 bad on this data set. Is there a question? - [Student] How do you choose the plus one? - Yeah, the question is how do you choose the plus one? That's actually a really great question, it seems like kind of an arbitrary choice here, it's the only constant that appears in the loss function and that seems to offend your aesthetic sensibilities a bit maybe. 
But it turns out that this is somewhat of an arbitrary choice, because we don't actually care about the absolute values of the scores in this loss function, we only care about the relative differences between the scores. We only care that the correct score is much greater than the incorrect scores. So in fact if you imagine scaling up your whole W up or down, then it kind of rescales all the scores correspondingly and if you kind of work through the details and there's a detailed derivation of this in the course notes online, you find this choice of one actually doesn't matter. That this free parameter of one kind of washes out and is canceled with this scale, like the overall setting of the scale in W. And again, check the course notes for a bit more detail on that. So then I think it's kind of useful to think about a couple different questions to try to understand intuitively what this loss is doing. So the first question is what's going to happen to the loss if we change the scores of the car image just a little bit? Any ideas? Everyone's too scared to ask a question? Answer? [student speaking faintly] - Yeah, so the answer is that if we jiggle the scores for this car image a little bit, the loss will not change. So the SVM loss, remember, the only thing it cares about is getting the correct score to be greater than one more than the incorrect scores, but in this case, the car score is already quite a bit larger than the others, so if the scores for this class changed for this example changed just a little bit, this margin of one will still be retained and the loss will not change, we'll still get zero loss. The next question, what's the min and max possible loss for SVM? [student speaking faintly] Oh I hear some murmurs. So the minimum loss is zero, because if you can imagine that across all the classes, if our correct score was much larger then we'll incur zero loss across all the classes and it will be zero, and if you think back to this hinge loss plot that we had, then you can see that if the correct score goes very, very negative, then we could incur potentially infinite loss. So the min is zero and the max is infinity. Another question, sort of when you initialize these things and start training from scratch, usually you kind of initialize W with some small random values, as a result your scores tend to be sort of small uniform random values at the beginning of training. And then the question is that if all of your Ss, if all of the scores are approximately zero and approximately equal, then what kind of loss do you expect when you're using multiclass SVM? - [Student] Number of classes minus one. - Yeah, so the answer is number of classes minus one, because remember that if we're looping over all of the incorrect classes, so we're looping over C minus one classes, within each of those classes the two Ss will be about the same, so we'll get a loss of one because of the margin and we'll get C minus one. So this is actually kind of useful because when you, this is a useful debugging strategy when you're using these things, that when you start off training, you should think about what you expect your loss to be, and if the loss you actually see at the start of training at that first iteration is not equal to C minus one in this case, that means you probably have a bug and you should go check your code, so this is actually kind of a useful thing to be checking in practice. 
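As a quick check on both of those points, here is a tiny numpy sketch (the scores come from the cat column in the example above; everything else is just illustrative, not the assignment code):

```python
import numpy as np

# Cat image from the walkthrough: scores for (cat, car, frog), with cat (index 0) as the true class.
scores = np.array([3.2, 5.1, -1.7])
margins = np.maximum(0.0, scores - scores[0] + 1.0)   # hinge term against every class
margins[0] = 0.0                                      # don't count the correct class
print(margins.sum())                                  # 2.9, matching the walkthrough

# Debugging check at initialization: all scores roughly zero and equal.
scores0 = np.zeros(3)
margins0 = np.maximum(0.0, scores0 - scores0[0] + 1.0)
margins0[0] = 0.0
print(margins0.sum())                                 # 2.0, i.e. C - 1 for C = 3 classes
```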
Another question, what happens if, so I said we're summing an SVM over the incorrect classes, what happens if the sum is also over the correct class if we just go over everything? - [Student] The loss increases by one. - Yeah, so the answer is that the loss increases by one. And I think the reason that we do this in practice is because normally loss of zero is kind of, has this nice interpretation that you're not losing at all, so that's nice, so I think your answers wouldn't really change, you would end up finding the same classifier if you actually looped over all the categories, but if just by conventions we omit the correct class so that our minimum loss is zero. So another question, what if we used mean instead of sum here? - [Student] Doesn't change. - Yeah, the answer is that it doesn't change. So the number of classes is going to be fixed ahead of time when we select our data set, so that's just rescaling the whole loss function by a constant, so it doesn't really matter, it'll sort of wash out with all the other scale things because we don't actually care about the true values of the scores, or the true value of the loss for that matter. So now here's another example, what if we change this loss formulation and we actually added a square term on top of this max? Would this end up being the same problem or would this be a different classification algorithm? - [Student] Different. - Yes, this would be different. So here the idea is that we're kind of changing the trade-offs between good and badness in kind of a nonlinear way, so this would end up actually computing a different loss function. This idea of a squared hinge loss actually does get used sometimes in practice, so that's kind of another trick to have in your bag when you're making up your own loss functions for your own problems. So now you'll end up, oh, was there a question? - [Student] Why would you use a squared loss instead of a non-squared loss? - Yeah, so the question is why would you ever consider using a squared loss instead of a non-squared loss? And the whole point of a loss function is to kind of quantify how bad are different mistakes. And if the classifier is making different sorts of mistakes, how do we weigh off the different trade-offs between different types of mistakes the classifier might make? So if you're using a squared loss, that sort of says that things that are very, very bad are now going to be squared bad so that's like really, really bad, like we don't want anything that's totally catastrophically misclassified, whereas if you're using this hinge loss, we don't actually care between being a little bit wrong and being a lot wrong, being a lot wrong kind of like, if an example is a lot wrong, and we increase it and make it a little bit less wrong, that's kind of the same goodness as an example which was only a little bit wrong and then increasing it to be a little bit more right. So that's a little bit hand wavy, but this idea of using a linear versus a square is a way to quantify how much we care about different categories of errors. And this is definitely something that you should think about when you're actually applying these things in practice, because the loss function is the way that you tell your algorithm what types of errors you care about and what types of errors it should trade off against. So that's actually super important in practice depending on your application. 
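As a tiny illustration of that trade-off, here is what the two per-pair terms look like side by side (these helper names are made up for the example):

```python
def hinge_term(s_j, s_y, margin=1.0):
    # standard multiclass SVM term: a linear penalty once the margin is violated
    return max(0.0, s_j - s_y + margin)

def squared_hinge_term(s_j, s_y, margin=1.0):
    # squared hinge: the same violation, but large violations are penalized much more heavily
    return max(0.0, s_j - s_y + margin) ** 2
```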
So here's just a little snippet of sort of vectorized code in numpy, and you'll end up implementing something like this for the first assignment, but this kind of gives you the sense that this sum is actually like pretty easy to implement in numpy, it only takes a couple lines of vectorized code. And you can see in practice, like one nice trick is that we can actually go in here and zero out the margins corresponding to the correct class, and that makes it easy to then just, that's sort of one nice vectorized trick to skip, iterate over all but one class. You just kind of zero out the one you want to skip and then compute the sum anyway, so that's a nice trick you might consider using on the assignment. So now, another question about this loss function. Suppose that you were lucky enough to find a W that has loss of zero, you're not losing at all, you're totally winning, this loss function is crushing it, but then there's a question, is this W unique or were there other Ws that could also have achieved zero loss? - [Student] There are other Ws. - Answer, yeah, so there are definitely other Ws. And in particular, because we talked a little bit about this thing of scaling the whole problem up or down depending on W, so you could actually take W multiplied by two and this doubled W (Is it quad U now? I don't know.) [laughing] This would also achieve zero loss. So as a concrete example of this, you can go back to your favorite example and maybe work through the numbers a little bit later, but if you're taking W and we double W, then the margins between the correct and incorrect scores will also double. So that means that if all these margins were already greater than one, and we doubled them, they're still going to be greater than one, so you'll still have zero loss. And this is kind of interesting, because if our loss function is the way that we tell our classifier which W we want and which W we care about, this is a little bit weird, now there's this inconsistency and how is the classifier to choose between these different versions of W that all achieve zero loss? And that's because what we've done here is written down only a loss in terms of the data, and we've only told our classifier that it should try to find the W that fits the training data. But really in practice, we don't actually care that much about fitting the training data, the whole point of machine learning is that we use the training data to find some classifier and then we'll apply that thing on test data. So we don't really care about the training data performance, we really care about the performance of this classifier on test data. So as a result, if the only thing we're telling our classifier to do is fit the training data, then we can lead ourselves into some of these weird situations sometimes, where the classifier might have unintuitive behavior. So a concrete, canonical example of this sort of thing, by the way, this is not linear classification anymore, this is a little bit of a more general machine learning concept, is that suppose we have this data set of blue points, and we're going to fit some curve to the training data, the blue points, then if the only thing we've told our classifier to do is to try and fit the training data, it might go in and have very wiggly curves to try to perfectly classify all of the training data points. But this is bad, because we don't actually care about this performance, we care about the performance on the test data. 
So now if we have some new data come in that sort of follows the same trend, then this very wiggly blue line is going to be totally wrong. And in fact, what we probably would have preferred the classifier to do was maybe predict this straight green line, rather than this very complex wiggly line to perfectly fit all the training data. And this is a core fundamental problem in machine learning, and the way we usually solve it is this concept of regularization. So here we're going to add an additional term to the loss function. In addition to the data loss, which will tell our classifier that it should fit the training data, we'll also typically add another term to the loss function called a regularization term, which encourages the model to somehow pick a simpler W, where the concept of simple kind of depends on the task and the model. There's this whole idea of Occam's Razor, which is this fundamental idea in scientific discovery more broadly, which is that if you have many different competing hypotheses that could explain your observations, you should generally prefer the simpler one, because that's the explanation that is more likely to generalize to new observations in the future. And the way we operationalize this intuition in machine learning is typically through some explicit regularization penalty that's often written down as R. So then your standard loss function usually has these two terms, a data loss and a regularization loss, and there's some hyper-parameter here, lambda, that trades off between the two. And we talked about hyper-parameters and cross-validation in the last lecture, so this regularization hyper-parameter lambda will be one of the more important ones that you'll need to tune when training these models in practice. Question? - [Student] What does that lambda R W term have to do with [speaking faintly]. - Yeah, so the question is, what's the connection between this lambda R W term and actually forcing this wiggly line to become a straight green line? I didn't want to go through the derivation on this because I thought it would lead us too far astray, but you can imagine, maybe you're doing a regression problem in terms of different polynomial basis functions, and if you're adding this regularization penalty, maybe the model has access to polynomials of very high degree, but through this regularization term you could encourage the model to prefer polynomials of lower degree, if they fit the data properly, or if they fit the data relatively well. So you could imagine there's two ways to do this, either you can constrain your model class to just not contain the more powerful, more complex models, or you can add this soft penalty where the model still has access to more complex models, maybe high degree polynomials in this case, but you add this soft constraint saying that if you want to use these more complex models, you need to overcome this penalty for using their complexity. So that's the connection here; that is not quite linear classification, but this is the picture that many people have in mind when they think about regularization at least. So there's actually a lot of different types of regularization that get used in practice. The most common one is probably L2 regularization, or weight decay. But there's a lot of other ones that you might see. This L2 regularization is just the euclidean norm of this weight vector W, or sometimes the squared norm. Or sometimes half the squared norm because it makes your derivatives work out a little bit nicer.
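Putting the two terms together, a minimal sketch of that combined objective might look like the following, using the half-squared L2 norm variant just mentioned (the function names and the data-loss signature are only illustrative):

```python
import numpy as np

def full_loss(W, X, Y, data_loss_fn, lam):
    # data loss: fit the training data; regularization loss: prefer a "simpler" W.
    # lam is the hyperparameter lambda that trades off between the two terms.
    return data_loss_fn(W, X, Y) + lam * 0.5 * np.sum(W * W)
```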
But the idea of L2 regularization is you're just penalizing the euclidean norm of this weight vector. You might also sometimes see L1 regularization, where we're penalizing the L1 norm of the weight vector, and the L1 regularization has some nice properties like encouraging sparsity in this matrix W. Some other things you might see would be this elastic net regularization, which is some combination of L1 and L2. You sometimes see max norm regularization, penalizing the max norm rather than the L1 or L2 norm. But these sorts of regularizations are things that you see not just in deep learning, but across many areas of machine learning and even optimization more broadly. In some later lectures, we'll also see some types of regularization that are more specific to deep learning. For example dropout, we'll see in a couple lectures, or batch normalization, stochastic depth, these things get kind of crazy in recent years. But the whole idea of regularization is just anything that you do to your model that sort of penalizes somehow the complexity of the model, rather than explicitly trying to fit the training data. Question? [student speaking faintly] Yeah, so the question is, how does the L2 regularization measure the complexity of the model? Thankfully we have an example of that right here, maybe we can walk through. So here we maybe have some training example, x, and there's two different Ws that we're considering. So x is just this vector of four ones, and we're considering these two different possibilities for W. One is a single one in the first entry and three zeros, and the other has this 0.25 spread across the four different entries. And now, when we're doing linear classification, we're really taking dot products between our x and our W. So in terms of linear classification, these two Ws are the same, because they give the same result when dot producted with x. But now the question is, if you look at these two examples, which one would L2 regularization prefer? Yeah, so L2 regularization would prefer W2, because it has a smaller norm. So the answer is that L2 regularization measures complexity of the classifier in this relatively coarse way, where the idea is that, remember, the Ws in linear classification had this interpretation of how much does this value of the vector x correspond to this output class? So L2 regularization is saying that it prefers to spread that influence across all the different values in x. Maybe this might be more robust, in case you come up with xs that vary, then our decisions are spread out and depend on the entire x vector, rather than depending only on certain elements of the x vector. And by the way, L1 regularization has this opposite interpretation. So actually if we were using L1 regularization, then we would actually prefer W1 over W2, because L1 regularization has this different notion of complexity, saying that maybe the model is less complex, maybe we measure model complexity by the number of zeros in the weight vector, so the question of how do we measure complexity and how does L2 measure complexity? They're kind of problem dependent. And you have to think about for your particular setup, for your particular model and data, how do you think that complexity should be measured on this task? Question? - [Student] So why would L1 prefer W1? Don't they sum to the same one? - Oh yes, you're right. So in this case, L1 is actually the same between these two. But you could construct a similar example to this where W1 would be preferred by L1 regularization.
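That example is easy to check directly; here w1 and w2 are just the two candidate weight vectors described above:

```python
import numpy as np

x  = np.array([1.0, 1.0, 1.0, 1.0])
w1 = np.array([1.0, 0.0, 0.0, 0.0])
w2 = np.array([0.25, 0.25, 0.25, 0.25])

print(w1.dot(x), w2.dot(x))                    # 1.0 and 1.0: identical scores, so the data loss can't tell them apart
print(np.sum(w1 ** 2), np.sum(w2 ** 2))        # 1.0 vs 0.25: the L2 penalty prefers the spread-out w2
print(np.sum(np.abs(w1)), np.sum(np.abs(w2)))  # 1.0 vs 1.0: the L1 penalty happens to be identical here, as noted
```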
I guess the general intuition behind L1 is that it generally prefers sparse solutions, that it drives all your entries of W to zero for most of the entries, except for a couple where it's allowed to deviate from zero. The way of measuring complexity for L1 is maybe the number of non-zero entries, and then for L2, it thinks that things that spread the W across all the values are less complex. So it depends on your data, depends on your problem. Oh and by the way, if you're a hardcore Bayesian, then using L2 regularization has this nice interpretation of MAP inference under a Gaussian prior on the parameter vector. I think there was a homework problem about that in 229, but we won't talk about that for the rest of the quarter. That's sort of my long, deep dive into the multi-class SVM loss. Question? - [Student] Yeah, so I'm still confused about what the kind of stuff I need to do when the linear versus polynomial thing, because the use of this loss function isn't going to change the fact that you're just doing, you're looking at a linear classifier, right? - Yeah, so the question is that adding a regularization is not going to change the hypothesis class. This is not going to change us away from a linear classifier. The idea is that maybe this example of polynomial regression is not quite linear regression; it could be seen as linear regression on top of a polynomial expansion of the input, in which case this regularization sort of says that you're not allowed to use as many polynomial coefficients as maybe you should have. Right, so you can imagine this is like, when you're doing polynomial regression, you can write out a polynomial as f of x equals A zero plus A one x plus A two x squared plus A three x whatever, in that case your parameters, your Ws, would be these As, in which case, penalizing the W could force it towards lower degree polynomials. Except in the case of polynomial regression, you don't actually want to parameterize in terms of As, there's some other parameterization that you want to use, but that's the general idea, that you're sort of penalizing the parameters of the model to force it towards the simpler hypotheses within your hypothesis class. And maybe we can take this offline if that's still a bit confusing. So then we've sort of seen this multi-class SVM loss, and just by the way as a side note, this is one extension or generalization of the SVM loss to multiple classes, there's actually a couple different formulations that you can see around in the literature, but I mean, my intuition is that they all tend to work similarly in practice, at least in the context of deep learning. So we'll stick with this one particular formulation of the multi-class SVM loss in this class. But of course there's many different loss functions you might imagine. And another really popular choice, in addition to the multi-class SVM loss, another really popular choice in deep learning is this multinomial logistic regression, or a softmax loss. And this one is probably actually a bit more common in the context of deep learning, but I decided to present this second for some reason. So remember in the context of the multi-class SVM loss, we didn't actually have an interpretation for those scores. Remember, when we're doing some classification, our model F spits out these 10 numbers, which are our scores for the classes, and for the multi-class SVM, we didn't actually give much interpretation to those scores.
We just said that we want the true score, the score of the correct class, to be greater than the incorrect classes, and beyond that we don't really say what those scores mean. But now, for the multinomial logistic regression loss function, we actually will endow those scores with some additional meaning. And in particular we're going to use those scores to compute a probability distribution over our classes. So we use this so-called softmax function where we take all of our scores, we exponentiate them so that now they become positive, then we re-normalize them by the sum of those exponents, so now after we send our scores through this softmax function, we end up with this probability distribution, where now we have probabilities over our classes, where each probability is between zero and one, and the sum of probabilities across all classes sums to one. And now the interpretation is that we want, there's this computed probability distribution that's implied by our scores, and we want to compare this with the target or true probability distribution. So if we know that the thing is a cat, then the target probability distribution would put all of the probability mass on cat, so we would have probability of cat equals one, and zero probability for all the other classes. So now what we want to do is encourage our computed probability distribution that's coming out of this softmax function to match this target probability distribution that has all the mass on the correct class. And the way that we do this, I mean, you can formulate this in many ways, you can do this as a KL divergence between the target and the computed probability distribution, you can do this as a maximum likelihood estimate, but at the end of the day, what we really want is that the probability of the true class is high, as close to one as possible. So then our loss will now be the negative log of the probability of the true class. This is confusing 'cause we're putting this through multiple different things, but remember we wanted the probability to be close to one, so now log is a monotonic function, it goes like this, and it turns out mathematically, it's easier to maximize log than it is to maximize the raw probability, so we stick with log. And now log is monotonic, so if we maximize log P of correct class, that means we want that to be high, but loss functions measure badness not goodness so we need to put in the minus sign to make it go the right way. So now our loss function for the softmax is going to be the minus log of the probability of the true class. Yeah, so that's the summary here, is that we take our scores, we run them through the softmax, and now our loss is this minus log of the probability of the true class. Okay, so then if you look at what this looks like on a concrete example, then we go back to our favorite beautiful cat with our three examples and we've got these three scores that are coming out of our linear classifier, and these scores are exactly the way that they were in the context of the SVM loss. But now, rather than taking these scores and putting them directly into our loss function, we're going to take them all and exponentiate them so that they're all positive, and then we'll normalize them to make sure that they all sum to one. And now our loss will be the minus log of the probability of the true class. So that's the softmax loss, also called multinomial logistic regression.
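Here is that computation as a small numpy sketch, using the same cat scores as before (subtracting the max is a common numerical-stability trick and doesn't change the resulting probabilities):

```python
import numpy as np

scores = np.array([3.2, 5.1, -1.7])          # cat, car, frog scores for the cat image
shifted = scores - np.max(scores)            # shift for numerical stability before exponentiating
probs = np.exp(shifted) / np.sum(np.exp(shifted))
print(probs)                                 # roughly [0.13, 0.87, 0.00]
print(-np.log(probs[0]))                     # minus log probability of the true class (cat), roughly 2.04
```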
So now we asked several questions to try to gain intuition about the multi-class SVM loss, and it's useful to think about some of the same questions to contrast with the softmax loss. So then the question is, what's the min and max value of the softmax loss? Okay, maybe not so sure, there's too many logs and sums and stuff going on in here. So the answer is that the min loss is zero and the max loss is infinity. And the way that you can see this: the probability distribution that we want is one on the correct class, zero on the incorrect classes, and if that were the case, then this thing inside the log would end up being one, because it's the probability of the true class, then log of one is zero, minus log of one is still zero. So that means that if we got the thing totally right, then our loss would be zero. But by the way, in order to get the thing totally right, what would our scores have to look like? Murmuring, murmuring. So the scores would actually have to go quite extreme, like towards infinity. So because we actually have this exponentiation, this normalization, the only way we can actually get a probability distribution of one and zero is actually putting an infinite score for the correct class, and minus infinity score for all the incorrect classes. And computers don't do so well with infinities, so in practice, you'll never get to zero loss on this thing with finite precision. But you still have this interpretation that zero is the theoretical minimum loss here. And the maximum loss is unbounded. So suppose that we had zero probability mass on the correct class, then you would have minus log of zero, log of zero is minus infinity, so minus log of zero would be plus infinity, so that's really bad. But again, you'll never really get here because the only way you can actually get this probability to be zero is if e to the correct class score is zero, and that can only happen if that correct class score is minus infinity. So again, you'll never actually get to these minimum, maximum values with finite precision. So then, remember we had this debugging, sanity check question in the context of the multi-class SVM, and we can ask the same for the softmax. If all the Ss are small and about zero, then what is the loss here? Yeah, answer? - [Student] Minus log one over C. - So minus log of one over C? I think that's, yeah, so then it'd be minus log of one over C, and because log can flip the thing, it's just log of C. Yeah, so it's just log of C. And again, this is a nice debugging thing, if you're training a model with this softmax loss, you should check this at the first iteration. If it's not log C, then something's gone wrong. So then we can compare and contrast these two loss functions a bit. In terms of linear classification, this setup looks the same. We've got this W matrix that gets multiplied against our input to produce this vector of scores, and now the difference between the two loss functions is how we choose to interpret those scores to quantitatively measure the badness afterwards. So for SVM, we were going to go in and look at the margins between the scores of the correct class and the scores of the incorrect class, whereas for this softmax or cross-entropy loss, we're going to go and compute a probability distribution and then look at the minus log probability of the correct class. So sometimes if you look at, in terms of, nevermind, I'll skip that point.
[laughing] So another question that's interesting when contrasting these two loss functions is thinking, suppose that I've got this example point, and if you change its scores, so assume that we've got three scores for this, ignore the part on the bottom. But remember if we go back to this example where in the multi-class SVM loss, when we had the car, and the car score was much better than all the incorrect classes, then jiggling the scores for that car image didn't change the multi-class SVM loss at all, because the only thing that the SVM loss cared about was getting that correct score to be greater than a margin above the incorrect scores. But now the softmax loss is actually quite different in this respect. The softmax loss actually always wants to drive that probability mass all the way to one. So even if you're giving very high score to the correct class, and very low score to all the incorrect classes, softmax will want you to pile more and more probability mass on the correct class, and continue to push the score of that correct class up towards infinity, and the score of the incorrect classes down towards minus infinity. So that's the interesting difference between these two loss functions in practice. That SVM, it'll get this data point over the bar to be correctly classified and then just give up, it doesn't care about that data point any more. Whereas softmax will just always try to continually improve every single data point to get better and better and better and better. So that's an interesting difference between these two functions. In practice, I think it tends not to make a huge difference which one you choose, they tend to perform pretty similarly across, at least a lot of deep learning applications. But it is very useful to keep some of these differences in mind. Yeah, so to recap where we've come to from here, is that we've got some data set of xs and ys, we use our linear classifier to get some score function, to compute our scores S, from our inputs, x, and then we'll use a loss function, maybe softmax or SVM or some other loss function to compute how quantitatively bad were our predictions compared to this ground true targets, y. And then we'll often augment this loss function with a regularization term, that tries to trade off between fitting the training data and preferring simpler models. So this is a pretty generic overview of a lot of what we call supervised learning, and what we'll see in deep learning as we move forward, is that generally you'll want to specify some function, f, that could be very complex in structure, specify some loss function that determines how well your algorithm is doing, given any value of the parameters, some regularization term for how to penalize model complexity and then you combine these things together and try to find the W that minimizes this final loss function. But then the question is, how do we actually go about doing that? How do we actually find this W that minimizes the loss? And that leads us to the topic of optimization. So when we're doing optimization, I usually think of things in terms of walking around some large valley. So the idea is that you're walking around this large valley with different mountains and valleys and streams and stuff, and every point on this landscape corresponds to some setting of the parameters W. And you're this little guy who's walking around this valley, and you're trying to find, and the height of each of these points, sorry, is equal to the loss incurred by that setting of W. 
And now your job, as this little man walking around this landscape, is to somehow find the bottom of this valley. And this is kind of a hard problem in general. You might think, maybe I'm really smart and I can think really hard about the analytic properties of my loss function, my regularization, all that, and maybe I can just write down the minimizer, and that would sort of correspond to magically teleporting all the way to the bottom of this valley. But in practice, once your prediction function, f, and your loss function and your regularizer, once these things get big and complex and we're using neural networks, there's really not much hope in trying to write down an explicit analytic solution that takes you directly to the minima. So in practice we tend to use various types of iterative methods, where we start with some solution and then gradually improve it over time. So the very first, stupidest thing that you might imagine is random search, which is to just take a bunch of Ws, sampled randomly, and throw them into our loss function and see how well they do. So spoiler alert, this is a really bad algorithm, you probably shouldn't use this, but at least it's one thing you might imagine trying. And we can actually do this, we can actually try to train a linear classifier via random search, for CIFAR-10, and for this there's 10 classes, so random chance is 10%, and if we did some number of random trials, we eventually found, just through sheer dumb luck, some setting of W that got maybe 15% accuracy. So it's better than random, but state of the art is maybe 95%, so we've got a little bit of a gap to close here. So again, probably don't use this in practice, but you might imagine that this is something you could potentially do. So in practice, maybe a better strategy is actually using some of the local geometry of this landscape. So if you're this little guy who's walking around this landscape, maybe you can't see directly the path down to the bottom of the valley, but what you can do is feel with your foot and figure out what is the local geometry, if I'm standing right here, which way will take me a little bit downhill? So you can feel with your feet and feel where the slope of the ground is taking me down a little bit in this direction. And you can take a step in that direction, and then you'll go down a little bit, feel again with your feet to figure out which way is down, and then repeat over and over again and hope that you'll end up at the bottom of the valley eventually. So this also seems like a relatively simple algorithm, but actually this one tends to work really well in practice if you get all the details right. So this is generally the strategy that we'll follow when training these large neural networks and linear classifiers and other things. So then, that was a little hand wavy, so what is slope? If you remember back to your calculus class, then at least in one dimension, the slope is the derivative of the function. So if we've got some one-dimensional function, f, that takes in a scalar x, and then outputs the height of some curve, then we can compute the slope or derivative at any point by imagining we take a small step, h, in some direction, compare the difference in the function value over that step, and then shrink that step size toward zero; that will give us the slope of that function at that point. And this generalizes quite naturally to multi-variable functions as well.
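To make the random-search idea concrete, here's a minimal sketch in NumPy. The data here is a tiny random stand-in for CIFAR-10-shaped inputs, and the loss is a simple softmax loss for illustration, not necessarily the exact setup used on the slide:

```python
import numpy as np

def softmax_loss(X, y, W):
    # Average cross-entropy loss of a linear classifier with scores = W @ x.
    scores = X.dot(W.T)                                   # (N, C) scores
    scores -= scores.max(axis=1, keepdims=True)           # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return -np.log(probs[np.arange(len(y)), y]).mean()

# Toy stand-ins shaped like CIFAR-10 data (in reality N would be 50,000).
N, D, C = 500, 3073, 10
X_train = np.random.randn(N, D)
y_train = np.random.randint(C, size=N)

best_loss, best_W = float('inf'), None
for trial in range(1000):
    W = np.random.randn(C, D) * 1e-4          # guess a completely random W
    loss = softmax_loss(X_train, y_train, W)  # see how badly it does
    if loss < best_loss:                       # keep the best guess so far
        best_loss, best_W = loss, W
```

The point is just that this is the dumbest possible baseline to compare against, not something to actually use.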
So in practice, our x is maybe not a scalar but a whole vector, so we need to generalize this notion to multi-variable things. And the generalization that we use of the derivative in the multi-variable setting is the gradient, so the gradient is a vector of partial derivatives. So the gradient will have the same shape as x, and each element of the gradient will tell us what the slope of the function f is if we move in that coordinate direction. And the gradient turns out to have these very nice properties: the gradient is a vector of partial derivatives, but it points in the direction of greatest increase of the function, and correspondingly, if you look at the negative gradient direction, that gives you the direction of greatest decrease of the function. And more generally, if you want to know, what is the slope of my landscape in any direction? Then that's equal to the dot product of the gradient with the unit vector describing that direction. So this gradient is super important, because it gives you this linear, first-order approximation to your function at your current point. So in practice, a lot of deep learning is about computing gradients of your functions and then using those gradients to iteratively update your parameter vector. So one naive way that you might imagine actually evaluating this gradient on a computer is using the method of finite differences, going back to the limit definition of the gradient. So here on the left, we imagine that our current W is this parameter vector that maybe gives us some current loss of maybe 1.25, and our goal is to compute the gradient, dW, which will be a vector of the same shape as W, and each slot in that gradient will tell us how much the loss will change if we move a tiny, infinitesimal amount in that coordinate direction. So one thing you might imagine is just computing these finite differences: we have our W, we might try to increment the first element of W by a small value, h, and then re-compute the loss using our loss function and our classifier and all that. And maybe in this setting, if we move a little bit in the first dimension, then our loss will decrease a little bit from 1.2534 to 1.25322. And then we can use this limit definition to come up with this finite differences approximation to the gradient in this first dimension. And now you can imagine repeating this procedure in the second dimension, where now we take the first dimension, set it back to the original value, and increment the second dimension by a small step. And again, we compute the loss and use this finite differences approximation to compute an approximation to the gradient in the second slot. And now repeat this again for the third, and on and on and on. So this is actually a terrible idea because it's super slow. So you might imagine that computing this function, f, might actually be super slow if it's a large, convolutional neural network. And this parameter vector, W, probably will not have 10 entries like it does here, it might have tens of millions or even hundreds of millions of entries for some of these large, complex deep learning models. So in practice, you'll never want to compute your gradients using finite differences, 'cause you'd have to wait for potentially hundreds of millions of function evaluations to get a single gradient, and that would be super slow and super bad. But thankfully we don't have to do that.
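Here's a minimal sketch of that finite-differences procedure. f is assumed to be any function mapping a weight array to a scalar loss, for example a lambda wrapping the softmax loss from the earlier sketch:

```python
import numpy as np

def numerical_gradient(f, W, h=1e-5):
    # Finite-difference estimate of the gradient of a scalar-valued f at W.
    # For each coordinate: nudge it by h, re-evaluate the loss, and record
    # the change divided by the step size. Very slow, but simple.
    grad = np.zeros_like(W)
    fx = f(W)                                # loss at the current W
    it = np.nditer(W, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        old_value = W[idx]
        W[idx] = old_value + h               # nudge this one coordinate
        grad[idx] = (f(W) - fx) / h          # slope along this coordinate
        W[idx] = old_value                   # restore it before moving on
        it.iternext()
    return grad
```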
Hopefully you took a calculus course at some point in your lives, so you know that, thanks to these guys, we can just write down the expression for our loss and then use the magical hammer of calculus to just write down an expression for what this gradient should be. And this'll be way better than trying to compute it numerically via finite differences. One, it'll be exact, and two, it'll be much faster since we just need to compute this single expression. So what this would look like is, if we go back to this picture of our current W, rather than iterating over all the dimensions of W, we'll figure out ahead of time what the analytic expression for the gradient is, and then just write it down and go directly from the W and compute the dW, or the gradient, in one step. And that will be much better in practice. So in summary, this numerical gradient is something that's simple and makes sense, but you won't really use it in practice. In practice, you'll always take an analytic gradient and use that when actually performing these gradient computations. However, one interesting note is that these numeric gradients are actually a very useful debugging tool. Say you've written some code that computes the loss and the gradient of the loss, then how do you debug this thing? How do you make sure that this analytic expression that you derived and wrote down in code is actually correct? So a really common debugging strategy for these things is to use the numeric gradient as sort of a unit test to make sure that your analytic gradient was correct. Again, because this is super slow and inexact, then when doing this numeric gradient checking, as it's called, you'll tend to scale down the parameters of the problem so that it actually runs in a reasonable amount of time. But this ends up being a super useful debugging strategy when you're writing your own gradient computations. So this is actually very commonly used in practice, and you'll do this on your assignments as well. So then, once we know how to compute the gradient, it leads us to this super simple algorithm that's like three lines, but turns out to be at the heart of how we train even the very biggest, most complex deep learning algorithms, and that's gradient descent. So in gradient descent, first we initialize our W as some random thing, then while true, we'll compute our loss and our gradient, and then we'll update our weights in the opposite of the gradient direction, 'cause remember that the gradient was pointing in the direction of greatest increase of the function, so minus gradient points in the direction of greatest decrease, so we'll take a small step in the direction of minus gradient, and just repeat this forever, and eventually your network will converge and you'll be very happy, hopefully. But this step size is actually a hyper-parameter, and it tells us, every time we compute the gradient, how far we step in that direction. And this step size, also sometimes called a learning rate, is probably one of the single most important hyper-parameters that you need to set when you're actually training these things in practice. Actually, for me, when I'm training these things, trying to figure out this step size or this learning rate is the first hyper-parameter that I always check. Things like model size or regularization strength I leave until a little bit later, and getting the learning rate or the step size correct is the first thing that I try to set at the beginning.
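As a rough sketch, the vanilla gradient descent loop described above looks something like this, where evaluate_loss_and_gradient is a hypothetical stand-in for your analytic loss-and-gradient code and the data arrays and step size are placeholders:

```python
import numpy as np

# Hypothetical stand-ins: X_train, y_train, and evaluate_loss_and_gradient,
# which returns the loss and its analytic gradient with respect to W.
W = 0.001 * np.random.randn(10, 3073)      # random initial weights
step_size = 1e-3                           # the learning rate hyper-parameter
while True:
    loss, grad = evaluate_loss_and_gradient(X_train, y_train, W)
    W -= step_size * grad                  # take a small step opposite the gradient
```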
So pictorially, what this looks like: here's a simple example in two dimensions. So here we've got maybe this bowl that's showing our loss function, where this red region in the center is the region of low loss we want to get to, and these blue and green regions towards the edge are higher loss that we want to avoid. So now we're going to start off our W at some random point in the space, and then we'll compute the negative gradient direction, which will hopefully point us in the direction of the minima eventually. And what this looks like in practice is that if we repeat this thing over and over again, then we will start off at some point and eventually, taking tiny gradient steps each time, you'll see that the parameter will arc in toward the center, this region of minima, and that's really what you want, because you want to get to low loss. And by the way, as a bit of a teaser, we saw in the previous slide this example of very simple gradient descent, where at every step, we're just stepping in the direction of the negative gradient. But in practice, over the next couple of lectures, we'll see that there are slightly fancier update rules, where you can do slightly fancier things to incorporate gradients across multiple time steps and stuff like that, which tend to work a little bit better in practice and are used much more commonly than this vanilla gradient descent when training these things in practice. And then, as a bit of a preview, we can look at some of these slightly fancier methods on optimizing the same problem. So again, the black will be this same gradient computation, and these, I forgot which color they are, but these two other curves are using slightly fancier update rules to decide exactly how to use the gradient information to make our next step. So one of these is gradient descent with momentum, the other is this Adam optimizer, and we'll see more details about those later in the course. But the idea is that we have this very basic algorithm called gradient descent, where we use the gradient at every time step to determine where to step next, and there exist different update rules which tell us how exactly we use that gradient information. But it's all the same basic algorithm of trying to go downhill at every time step. But there's actually one more little wrinkle that we should talk about. So remember that we defined our loss function, we defined a loss that computes how bad our classifier is doing on any single training example, and then we said that our full loss over the data set was going to be the average loss across the entire training set. But in practice, this N could be very, very large. If we're using the ImageNet data set, for example, that we talked about in the first lecture, then N could be like 1.3 million, so computing this loss could actually be very expensive and require computing perhaps millions of evaluations of this function. So that could be really slow. And actually, because the gradient is a linear operator, when you actually try to compute the gradient of this expression, you see that the gradient of our loss is now the sum of the gradients of the losses for each of the individual terms. So now if we want to compute the gradient again, it sort of requires us to iterate over the entire training data set, all N of these examples.
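Written out, the full training loss and its gradient being described here look something like the following, with the λR(W) term standing for the regularization mentioned earlier:

$$
L(W) = \frac{1}{N}\sum_{i=1}^{N} L_i(x_i, y_i, W) + \lambda R(W),
\qquad
\nabla_W L(W) = \frac{1}{N}\sum_{i=1}^{N} \nabla_W L_i(x_i, y_i, W) + \lambda \nabla_W R(W).
$$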
So if our N was like a million, this would be super super slow, and we would have to wait a very, very long time before we make any individual update to W. So in practice, we tend to use what is called stochastic gradient descent, where rather than computing the loss and gradient over the entire training set, instead at every iteration, we sample some small set of training examples, called a minibatch. Usually this is a power of two by convention, like 32, 64, 128 are common numbers, and then we'll use this small minibatch to compute an estimate of the full sum, and an estimate of the true gradient. And this is stochastic because you can view this as a Monte Carlo estimate of some expectation of the true value. So now this makes our algorithm slightly fancier, but it's still only four lines. So now it's: while true, sample some random minibatch of data, evaluate your loss and gradient on the minibatch, and then make an update on your parameters based on this estimate of the loss and this estimate of the gradient. And again, we'll see slightly fancier update rules of exactly how to integrate multiple gradients over time, but this is the basic training algorithm that we use for pretty much all deep neural networks in practice. So we have another interactive web demo for actually playing around with linear classifiers, and training these things via stochastic gradient descent, but given how miserable the web demo was last time, I'm not actually going to open the link. Instead, I'll just play this video. [laughing] But I encourage you to go check this out and play with it online, because it actually helps to build some intuition about linear classifiers and training them via gradient descent. So here you can see on the left, we've got this problem where we're categorizing three different classes, and we've got these green, blue and red points that are our training samples from these three classes. And now we've drawn the decision boundaries for these classes, which are the colored background regions, as well as these arrows, giving you the direction of increase for the class scores for each of these three classes. And now, if you actually go and play with this thing online, you can see that we can go in and adjust the Ws, and changing the values of the Ws will cause these decision boundaries to rotate. If you change the biases, then the decision boundaries will not rotate, but will instead move side to side or up and down. Then we can actually take steps that try to decrease this loss, or you can change the step size with this slider. You can hit this button to actually run the thing. So now, with a big step size, we're running gradient descent right now, and these decision boundaries are flipping around and trying to fit the data. So it's doing okay now, but we can actually change the loss function in real time between these different SVM formulations and the different softmax. And you can see that as you flip between these different formulations of loss functions, it's generally doing the same thing. Our decision regions are mostly in the same place, but exactly how they end up relative to each other and exactly what the trade-offs are between categorizing these different things changes a little bit. So I really encourage you to go online and play with this thing to try to get some intuition for what it actually looks like to try to train these linear classifiers via gradient descent. Now as an aside, I'd like to talk about another idea, which is that of image features.
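Before moving on, here's a minimal sketch of the minibatch SGD loop just described. It differs from the earlier gradient descent sketch only in the sampling line; sample_minibatch and evaluate_loss_and_gradient are hypothetical stand-ins:

```python
import numpy as np

# Hypothetical stand-ins: X_train, y_train, sample_minibatch, and
# evaluate_loss_and_gradient, as in the earlier sketch.
W = 0.001 * np.random.randn(10, 3073)
step_size = 1e-3
while True:
    X_batch, y_batch = sample_minibatch(X_train, y_train, batch_size=128)
    loss, grad = evaluate_loss_and_gradient(X_batch, y_batch, W)
    W -= step_size * grad   # update using the minibatch estimate of the gradient
```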
So far we've talked about linear classifiers, which just means maybe taking our raw image pixels and then feeding the raw pixels themselves into our linear classifier. But as we talked about in the last lecture, this is maybe not such a great thing to do, because of things like multi-modality and whatnot. So in practice, actually feeding raw pixel values into linear classifiers tends to not work so well. So what was actually common, before the dominance of deep neural networks, was instead to have this two-stage approach, where first, you would take your image and then compute various feature representations of that image, that are maybe computing different kinds of quantities relating to the appearance of the image, and then concatenate these different feature vectors to give you some feature representation of the image, and now this feature representation of the image would be fed into a linear classifier, rather than feeding the raw pixels themselves into the classifier. And the motivation here is that, imagine we have a training data set like the one on the left, with red points in the middle and blue points around them. And for this kind of data set, there's no way that we can draw a linear decision boundary to separate the red points from the blue points. And we saw more examples of this in the last lecture. But if we use a clever feature transform, in this case transforming to polar coordinates, then after we do the feature transform, this complex data set actually might become linearly separable, and actually could be classified correctly by a linear classifier. And the whole trick here is to figure out what the right feature transform is that computes the right quantities for the problem that you care about. So for images, maybe converting your pixels to polar coordinates doesn't make sense, but you actually can try to write down feature representations of images that do make sense, and that might help you out and do better than putting raw pixels into the classifier. So one example of this kind of feature representation that's super simple is this idea of a color histogram. So you'll take this hue color spectrum and divide it into buckets, and then for every pixel, you'll map it into one of those color buckets, and then count up how many pixels fall into each of these different buckets. So this tells you, globally, what colors are in the image. Maybe for this example of a frog, this feature vector would tell us there's a lot of green stuff, and maybe not a lot of purple or red stuff. And this is kind of a simple feature vector that you might see in practice. Another common feature vector that we saw before the rise of neural networks, or before the dominance of neural networks, was this histogram of oriented gradients. So remember from the first lecture that Hubel and Wiesel found these oriented edges are really important in the human visual system, and this histogram of oriented gradients feature representation tries to capture the same intuition and measure the local orientation of edges in the image. So what this thing is going to do is take our image and then divide it into these little eight by eight pixel regions. And then within each of those eight by eight pixel regions, we'll compute what the dominant edge direction of each pixel is, quantize those edge directions into several buckets, and then within each of those regions, compute a histogram over these different edge orientations.
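As a small illustration of the color-histogram idea, here's one possible sketch. It assumes a float RGB image with values in [0, 1], uses matplotlib's rgb_to_hsv helper to get the hue channel, and the number of hue buckets is an arbitrary choice:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def color_histogram(image, n_bins=10):
    # image: H x W x 3 float array with RGB values in [0, 1].
    # Convert to HSV, bucket the hue channel, and count pixels per bucket,
    # giving a global summary of which colors appear in the image.
    hue = rgb_to_hsv(image)[..., 0]                       # hue in [0, 1]
    hist, _ = np.histogram(hue, bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()                              # normalized counts
```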
And now your full feature vector will be these different bucketed histograms of edge orientations across all the different eight by eight regions in the image. So this is in some sense dual to the color histogram feature that we saw before. The color histogram is saying, globally, what colors exist in the image, and this is saying, overall, what types of edge information exist in the image, and even, localized to different parts of the image, what types of edges exist in different regions. So maybe for this frog on the left, you can see he's sitting on a leaf, and these leaves have these dominant diagonal edges, and if you visualize the histogram of oriented gradients features, then you can see that in this region, we've got a lot of diagonal edges that this feature representation is capturing. So this was a super common feature representation and was used a lot for object recognition actually not too long ago. Another feature representation that you might see out there is this idea of bag of words. So this is taking inspiration from natural language processing. So if you've got a paragraph, then a way that you might represent that paragraph as a feature vector is counting up the occurrences of different words in that paragraph. So we want to take that intuition and apply it to images in some way. But the problem is that there's no really simple, straightforward analogy of words to images, so we need to define our own vocabulary of visual words. So we take this two-stage approach, where first we'll get a bunch of images, sample a whole bunch of tiny random crops from those images, and then cluster them using something like K-means to come up with these different cluster centers that are maybe representing different types of visual words in the images. So if you look at this example on the right here, this is a real example of actually clustering different image patches from images, and you can see that after this clustering step, our visual words capture these different colors, like red and blue and yellow, as well as these different types of oriented edges in different directions, which is interesting, because now we're starting to see these oriented edges come out of the data in a data-driven way. And now, once we've got this set of visual words, also called a codebook, then we can encode our image by trying to say, for each of these visual words, how much does this visual word occur in the image? And now this gives us, again, some slightly different information about what the visual appearance of this image is. And actually this is a type of feature representation that Fei-Fei worked on when she was a grad student, so this is something that you saw in practice not too long ago. So then, as a bit of a teaser, tying this all back together, the way that this image classification pipeline might have looked, maybe about five to 10 years ago, would be that you would take your image, and then compute these different feature representations of your image, things like bag of words, or histograms of oriented gradients, concatenate a whole bunch of features together, and then feed these extracted features into some linear classifier. I'm simplifying a little bit, the pipelines were a little bit more complex than that, but this is the general intuition. And then the idea here was that after you extracted these features, this feature extractor would be a fixed block that would not be updated during training.
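And here's a rough sketch of the two-stage bag-of-visual-words pipeline just described, using scikit-learn's k-means. The patch size, stride, and number of visual words are arbitrary choices for illustration, and images is assumed to be a list of H x W x 3 arrays:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(images, n_words=100, patch_size=8, patches_per_image=50):
    # Step 1: sample small random crops from training images and cluster them
    # with k-means; the cluster centers act as the "visual words".
    patches = []
    for img in images:
        H, W = img.shape[:2]
        for _ in range(patches_per_image):
            y = np.random.randint(0, H - patch_size)
            x = np.random.randint(0, W - patch_size)
            patches.append(img[y:y+patch_size, x:x+patch_size].ravel())
    return KMeans(n_clusters=n_words).fit(np.array(patches))

def bag_of_words_feature(image, kmeans, patch_size=8, stride=4):
    # Step 2: encode an image as a histogram of how often each visual word occurs.
    H, W = image.shape[:2]
    patches = [image[y:y+patch_size, x:x+patch_size].ravel()
               for y in range(0, H - patch_size + 1, stride)
               for x in range(0, W - patch_size + 1, stride)]
    words = kmeans.predict(np.array(patches))
    counts = np.bincount(words, minlength=kmeans.n_clusters)
    return counts / counts.sum()
```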
And during training, you would only update the linear classifier that's working on top of the features. And actually, I would argue that once we move to convolutional neural networks, and these deep neural networks, it actually doesn't look that different. The only difference is that rather than writing down the features ahead of time, we're going to learn the features directly from the data. So we'll take our raw pixels and feed them into the convolutional network, which will end up computing, through many different layers, some type of feature representation driven by the data, and then we'll actually train the weights for this entire network, rather than just the weights of the linear classifier on top. So, next time we'll really start diving into this idea in more detail, and we'll introduce some neural networks, and start talking about backpropagation as well.
Lecture 14: Deep Reinforcement Learning
- Okay let's get started. Alright, so welcome to lecture 14, and today we'll be talking about reinforcement learning. So some administrative details first, an update on grades. Midterm grades were released last night, so see Piazza for more information and statistics about that. And we also have A2 and milestone grades scheduled for later this week. Also, about your projects: all teams must register your projects. So on Piazza we have a form posted, and this is required, every team should go and fill out this form with information about your project, which we'll use for final grading and the poster session. And the Tiny ImageNet evaluation servers are also now online for those of you who are doing the Tiny ImageNet challenge. We also have a link to a course survey on Piazza that was released a few days ago, so please fill it out if you guys haven't already. We'd love to have your feedback and know how we can improve this class. Okay, so the topic of today, reinforcement learning. Alright, so so far we've talked about supervised learning, which is a type of problem where we have data x and then we have labels y, and our goal is to learn a function that maps from x to y. So, for example, the classification problem that we've been working with. We also talked last lecture about unsupervised learning, which is the problem where we have just data and no labels, and our goal is to learn some underlying, hidden structure of the data. So, an example of this is the generative models that we talked about last lecture. And so today we're going to talk about a different kind of problem set-up, the reinforcement learning problem. And so here we have an agent that can take actions in its environment, and it can receive rewards for its actions. And its goal is going to be to learn how to take actions in a way that maximizes its reward. And so we'll talk about this in a lot more detail today. So, the outline for today: we're going to first talk about the reinforcement learning problem, and then we'll talk about Markov decision processes, which are a formalism of the reinforcement learning problem, and then we'll talk about two major classes of RL algorithms, Q-learning and policy gradients. So, in the reinforcement learning set up, what we have is an agent and an environment. And so the environment gives the agent a state. In turn, the agent is going to take an action, and then the environment is going to give back a reward, as well as the next state. And so this is going to keep going on in this loop, on and on, until the environment gives back a terminal state, which then ends the episode. So, let's see some examples of this. First we have here the cart-pole problem, which is a classic problem that some of you may have seen, in, for example, 229 before. And so the objective here is that you want to balance a pole on top of a movable cart. Alright, so the state that you have here is your current description of the system. So, for example, the angle and angular speed of your pole, and the position and horizontal velocity of your cart. And the actions you can take are horizontal forces that you apply onto the cart, right? So you're basically trying to move this cart around to try and balance this pole on top of it. And the reward that you're getting from this environment is one at each time step if your pole is upright. So you basically want to keep this pole balanced for as long as you can.
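As a tiny illustration of this agent/environment loop on the cart-pole problem, here's a sketch using the classic CartPole environment from OpenAI Gym. Note it assumes the older Gym API (reset returning just the state, step returning four values), and the random action is only a placeholder for a real policy:

```python
import gym

env = gym.make('CartPole-v0')
state = env.reset()                        # environment samples an initial state
total_reward, done = 0.0, False
while not done:                            # loop until a terminal state
    action = env.action_space.sample()     # agent picks an action (random placeholder)
    state, reward, done, info = env.step(action)   # env returns reward and next state
    total_reward += reward                 # +1 for every step the pole stays upright
print(total_reward)
```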
Okay, so here's another example of a classic RL problem: robot locomotion. So we have here an example of a humanoid robot, as well as an ant robot model. And our objective here is to make the robot move forward. And so the state that we have describing our system is the angles and positions of all the joints of our robot. And then the actions that we can take are the torques applied onto these joints, right, and so these are trying to make the robot move forward, and then the reward that we get is our forward movement, as well as, I think, in the case of the humanoid, something like a reward of one for each time step that this robot is upright. So, games are also a big class of problems that can be formulated with RL. So, for example, here we have Atari games, which are a classic success of deep reinforcement learning, and so here the objective is to complete these games with the highest possible score, right. So your agent is basically a player that's trying to play these games. And the state that you have is going to be the raw pixels of the game state. Right, so these are just the pixels on the screen that you would see as you're playing the game. And then the actions that you have are your game controls, so for example, in some games, maybe moving left or right, up or down. And then the reward that you get is your score increase or decrease at each time step, and your goal is going to be to maximize your total score over the course of the game. And finally, here we have another example of a game: Go, which is something that was a huge achievement of deep reinforcement learning last year, when DeepMind's AlphaGo beat Lee Sedol, who is one of the best Go players of the last few years, and this is actually in the news again, as some of you may have seen; there's another Go competition going on now with AlphaGo versus a top-ranked Go player. And so the objective here is to win the game, and our state is the position of all the pieces, the action is where to put the next piece down, and the reward is one if you win at the end of the game, and zero otherwise. And we'll also talk about this one in a little bit more detail later. Okay, so how can we mathematically formalize the RL problem, right? This loop that we talked about earlier, of environments giving agents states, and then agents taking actions. So, a Markov decision process is the mathematical formulation of the RL problem, and an MDP satisfies the Markov property, which is that the current state completely characterizes the state of the world. And an MDP here is defined by a tuple of objects, consisting of S, which is the set of possible states. We have A, our set of possible actions. We also have R, our distribution of reward given a state, action pair, so it's a function mapping from a state-action pair to your reward. You also have P, which is a transition probability distribution over the next state that you're going to transition to, given your state, action pair. And then finally we have gamma, a discount factor, which is basically saying how much we value rewards coming up soon versus later on. So, the way the Markov decision process works is that at our initial time step t equals zero, the environment is going to sample some initial state s-zero from the initial state distribution, p of s-zero. And then, once it has that, then from time t equals zero until it's done, we're going to iterate through this loop where the agent is going to select an action, a sub t.
The environment is going to sample a reward from here, so a reward given your state and the action that you just took. It's also going to sample the next state, at time t plus one, given your transition probability distribution, and then the agent is going to receive the reward, as well as the next state, and then we're going to go through this process again, and keep looping; the agent will select the next action, and so on until the episode is over. Okay, so now based on this, we can define a policy pi, which is a function from your states to your actions that specifies what action to take in each state. And this can be either deterministic or stochastic. And our objective now is going to be to find your optimal policy pi star that maximizes your cumulative discounted reward. So we can see here we have our sum of future rewards, which are also discounted by your discount factor. So, let's look at an example of a simple MDP. And here we have Grid World, which is this task where we have this grid of states. So you can be in any of these cells of your grid, which are your states. And you can take actions from your states, and so these actions are going to be simple movements, moving to your right, to your left, up or down. And you're going to get a negative reward for each transition, or each time step, basically, each movement that you take, and this can be something like R equals negative one. And so your objective is going to be to reach one of the terminal states, which are the gray states shown here, in the least number of actions. Right, so the longer that you take to reach your terminal state, the more you're going to keep accumulating these negative rewards. Okay, so if you look at a random policy here, a random policy would consist of, basically, at any given state or cell that you're in, just sampling randomly which direction you're going to move in next. Right, so all of these have equal probability. On the other hand, an optimal policy that we would like to have is basically taking the action, the direction, that will move us closest to a terminal state. So you can see here, if we're right next to one of the terminal states, we should always move in the direction that gets us to this terminal state. And otherwise, if you're in one of these other states, you want to take the direction that will take you closest to one of these states. Okay, so now, given this description of our MDP, what we want to do is we want to find our optimal policy pi star. Right, our policy that's maximizing the sum of the rewards. And so this optimal policy is going to tell us, given any state that we're in, what is the action that we should take in order to maximize the sum of the rewards that we'll get. And so one question is, how do we handle the randomness in the MDP, right? We have randomness in terms of our initial state that we're sampling, in terms of this transition probability distribution that will give us a distribution over our next states, and so on. So what we'll do is we'll work, then, with maximizing our expected sum of rewards. So, formally, we can write our optimal policy pi star as maximizing this expected sum of future rewards over policies pi, where we have our initial state sampled from our state distribution. We have our actions sampled from our policy, given the state. And then we have our next states sampled from our transition probability distributions.
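Written out, the objective just described looks something like this:

$$
\pi^* = \arg\max_{\pi}\; \mathbb{E}\!\left[\sum_{t \ge 0} \gamma^{t} r_t \;\middle|\; \pi\right],
\quad \text{with } s_0 \sim p(s_0),\; a_t \sim \pi(\cdot \mid s_t),\; s_{t+1} \sim p(\cdot \mid s_t, a_t).
$$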
Okay, so before we talk about exactly how we're going to find this policy, let's first talk about a few definitions that are going to be helpful for us in doing so. So, specifically, the value function and the Q-value function. So, as we follow the policy, we're going to sample trajectories, or paths, for every episode. We're going to have our initial state s-zero, a-zero, r-zero, s-one, a-one, r-one, and so on. We're going to have this trajectory of states, actions, and rewards that we get. And so, how good is a state that we're currently in? Well, the value function at any state s is the expected cumulative reward following the policy from state s, from here on out. Right, so it's going to be the expected value of our cumulative reward, starting from our current state. And then how good is a state, action pair? So how good is taking action a in state s? Well, we define this using a Q-value function, which is the expected cumulative reward from taking action a in state s and then following the policy. Right, so then, the optimal Q-value function that we can get is going to be Q star, which is the maximum expected cumulative reward that we can get from a given state, action pair, defined here. So now we're going to see one important thing in reinforcement learning, which is called the Bellman equation. So let's consider the Q-value function from the optimal policy, Q star, which is going to satisfy this Bellman equation, this identity shown here, and what this means is that given any state, action pair, s and a, the value of this pair is going to be the reward that you're going to get, r, plus the value of whatever state you end up in, let's say s prime. And since we know that we have the optimal policy, then we also know that we're going to play the best action that we can, right, at our state s prime. And so then, the value at state s prime is just going to be the maximum over our actions, a prime, of Q star at s prime, a prime. And so then we get this identity here for the optimal Q-value. Right, and then also, as always, we have this expectation here, because we have randomness over what state we're going to end up in. And then we can also infer, from here, that our optimal policy is going to consist of taking the best action in any state, as specified by Q star. Q star is going to tell us the maximum future reward that we can get from any of our actions, so we should just take a policy that follows this and takes the action that's going to lead to the best reward. Okay, so how can we solve for this optimal policy? So, one way we can solve for this is something called a value iteration algorithm, where we're going to use this Bellman equation as an iterative update. So at each step, we're going to refine our approximation of Q star by trying to enforce the Bellman equation. And so, under some mathematical conditions, we also know that this sequence Q_i of our Q-functions is going to converge to our optimal Q star as i approaches infinity. And so this works well, but what's the problem with this? Well, an important problem is that this is not scalable. Right? We have to compute Q of s, a here for every state, action pair in order to make our iterative updates. Right, but then this is a problem if we look at, for example, the state of an Atari game that we had earlier; it's going to be your whole screen of pixels.
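For reference, the Bellman identity and the value-iteration update being described can be written as:

$$
Q^*(s,a) = \mathbb{E}_{s'}\!\left[\, r + \gamma \max_{a'} Q^*(s', a') \;\middle|\; s, a \right],
\qquad
Q_{i+1}(s,a) = \mathbb{E}_{s'}\!\left[\, r + \gamma \max_{a'} Q_i(s', a') \;\middle|\; s, a \right].
$$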
And this is a huge state space, and it's basically computationally infeasible to compute this for the entire state space. Okay, so what's the solution to this? Well, we can use a function approximator to estimate Q of s, a — so, for example, a neural network, right. So, we've seen before that any time we have some really complex function that we don't know, that we want to estimate, a neural network is a good way to estimate it. Okay, so this is going to take us to our formulation of Q-learning that we're going to look at. And so, what we're going to do is we're going to use a function approximator in order to estimate our action-value function. Right? And if this function approximator is a deep neural network, which is what's been used recently, then this is going to be called deep Q-learning. And so this is something that you'll hear around as one of the common approaches to deep reinforcement learning that's in use. Right, and so in this case, we also have our function parameters theta here, so our Q-value function is determined by these weights, theta, of our neural network. Okay, so given this function approximation, how do we solve for our optimal policy? So remember that we want to find a Q-function that satisfies the Bellman equation. Right, and so we want to enforce this Bellman equation to hold, so what we can do when we have this neural network approximating our Q-function is that we can train it with a loss function that's going to try and minimize the error in our Bellman equation, right? Or how far Q of s, a is from its target, which is the y_i here, the right-hand side of the Bellman equation that we saw earlier. So, on the forward pass we're basically going to compute this loss, trying to minimize this error, and then our backward pass, our gradient update, is just taking the gradient of this loss with respect to our network parameters theta. Right, and so our goal again, as we're taking gradient steps, is to iteratively try to make our Q-function closer to our target value. So, any questions about this? Okay. So let's look at a case study, one of the classic examples of deep reinforcement learning, where this approach was applied. And so we're going to look at this problem that we saw earlier of playing Atari games, where our objective was to complete the game with the highest score, and remember our state is going to be the raw pixel inputs of the game state, and we can take these actions of moving left, right, up, down, or whatever the actions of the particular game are. And our reward at each time step is going to be the score increase or decrease that we got at this time step, and so our cumulative total reward is the total score that we'll usually see at the top of the screen. Okay, so the network that we're going to use for our Q-function is going to look something like this, right, where we have our Q-network, with weights theta. And then our input, our state s, is going to be our current game screen. And in practice we're going to take a stack of the last four frames, so we have some history. And so we'll take these raw pixel values, we'll do some, you know, RGB to gray-scale conversions, some down-sampling, some cropping, so, some pre-processing. And what we'll get out of this is this 84 by 84 by four stack of the last four frames. Yeah, question.
[inaudible question from audience] Okay, so the question is, are we saying here that our network is going to approximate our Q-value function for different state, action pairs, for example, four of these? Yeah, that's correct. We'll see, we'll talk about that in a few slides. [inaudible question from audience] So, no. So, we don't have a Softmax layer after the fully-connected layer, because here our goal is to directly predict our Q-values. [inaudible question from audience] Q-values. [inaudible question from audience] Yes, so it's more doing regression to our Q-values. Okay, so we have our input to this network, and then on top of this, we're going to have a couple of familiar convolutional layers and a fully-connected layer, so here we have some eight-by-eight convolutions and some four-by-four convolutions. Then we have an FC-256 layer, so this is just a standard kind of network that you've seen before. And then, finally, our last fully-connected layer has a vector of outputs, which corresponds to your Q-value for each action, given the state that you've input. And so, for example, if you have four actions, then here we have this four-dimensional output corresponding to Q of the current s with a-one, and then a-two, a-three, and a-four. Right, so this is going to be one scalar value for each of our actions. And then the number of actions that we have can vary between, for example, 4 to 18, depending on the Atari game. And one nice thing here is that using this network structure, a single feedforward pass is able to compute the Q-values for all actions from the current state. And so this is really efficient. Right, so basically we take our current state in, and then because we have this output of a Q-value for each action as our output layer, we're able to do one pass and get all of these values out. And then in order to train this, we're just going to use our loss function from before. Remember, we're trying to enforce this Bellman equation, and so, on our forward pass, with our loss function we're going to try and iteratively make our Q-value close to the target value that it should have. And then our backward pass is just directly taking the gradient of this loss function that we have and then taking a gradient step based on that. So one other thing that's used here that I want to mention is something called experience replay. And so this addresses a problem with just using the plain Q-network that I just described, which is that learning from batches of consecutive samples is bad. And so the reason for this, right, is that if we're just playing the game, taking the state, action, reward samples that we get and just training on consecutive samples of these, well, all of these samples are correlated, and so this leads to inefficient learning, first of all. And also, our current Q-network parameters determine the policy that we're going to follow, which determines the next samples that we're going to get and use for training. And so this leads to problems where you can have bad feedback loops. So, for example, if the current maximizing action is to move left, well, this is going to bias all of my upcoming training examples to be dominated by samples from the left-hand side. And so this is a problem, right?
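As a rough sketch of the kind of network being described, here's one possible PyTorch version. Only the 84x84x4 input, the eight-by-eight and four-by-four convolutions, the FC-256 layer, and the per-action output come from the description above; the filter counts and strides (16 filters at stride 4, 32 filters at stride 2) are assumptions in the spirit of the original DeepMind architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    # Input: stack of the last 4 pre-processed frames, 84 x 84 each.
    # Output: one scalar Q-value per action, computed in a single forward pass.
    def __init__(self, num_actions=4):
        super().__init__()
        self.conv1 = nn.Conv2d(4, 16, kernel_size=8, stride=4)   # assumed 16 filters
        self.conv2 = nn.Conv2d(16, 32, kernel_size=4, stride=2)  # assumed 32 filters
        self.fc1 = nn.Linear(32 * 9 * 9, 256)                    # the FC-256 layer
        self.fc2 = nn.Linear(256, num_actions)                   # Q(s, a_1..a_K)

    def forward(self, x):            # x: (batch, 4, 84, 84)
        x = F.relu(self.conv1(x))    # -> (batch, 16, 20, 20)
        x = F.relu(self.conv2(x))    # -> (batch, 32, 9, 9)
        x = x.view(x.size(0), -1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)           # no softmax: these are regression outputs
```

The key design point is the output layer: one scalar per action, so a single forward pass gives the Q-values for every action at once.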
And so the way that we are going to address these problems is by using something called experience replay, where we're going to keep this replay memory, a table of state, action, reward, next state transitions that we have, and we're going to continuously update this table with new transitions that we're getting as game episodes are played, as we're getting more experience. Right, and so now what we can do is train our Q-network on random minibatches of transitions from the replay memory. Right, so instead of using consecutive samples, we're now going to take random samples across these transitions that we've accumulated, and this breaks the correlation problems that we had earlier. And then, as another side benefit, each of these transitions can also contribute to potentially multiple weight updates; we're just sampling from this table, and so we could sample one multiple times. And so this is also going to lead to greater data efficiency. Okay, so let's put this all together and look at the full algorithm for deep Q-learning with experience replay. So we're going to start off by initializing our replay memory to some capacity that we choose, N, and then we're also going to initialize our Q-network, just with our random or initial weights. And then we're going to play M episodes, or full games. These are going to be our training episodes. And then what we're going to do is initialize our state, using the starting game screen pixels at the beginning of each episode. And remember, we go through the pre-processing step to get to our actual input state. And then for each time step of the game that we're currently playing, we're going to, with a small probability, select a random action, because one thing that's important in these algorithms is to have sufficient exploration, so we want to make sure that we are sampling different parts of the state space. And then otherwise, we're going to select the greedy action from the current policy. Right, so most of the time we'll take the greedy action, which we think is a good choice for the actions we want to take and the states we want to see, and with a small probability we'll sample something random. Okay, so then we'll take this action a-t, and we'll observe the reward and the next state, so r-t and s-t-plus-one. And then we'll take this and store this transition in our replay memory that we're building up. And then we're going to train the network a little bit. So we're going to do experience replay, and we'll sample a random minibatch of transitions from the replay memory, and then we'll perform a gradient descent step on this. Right, so this is going to be our full training loop. We're going to be continuously playing this game, and then also sampling minibatches, using experience replay, to update the weights of our Q-network, and then continuing in this fashion. Okay, so let's see if this is playing. Okay, so let's take a look at this deep Q-learning algorithm from Google DeepMind, trained on the Atari game Breakout. Alright, so it's saying here that our input, our state, is just going to be the raw game pixels. And so here we're looking at what's happening at the beginning of training. So we've just started training a bit. And right, so it looks like it's learned to kind of hit the ball, but it's not doing a very good job of sustaining it.
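Putting the pieces together, here's a condensed sketch of that loop. env, q_net (a callable returning a vector of Q-values), preprocess, sgd_step, num_episodes, and num_actions are all hypothetical stand-ins, and many practical details are omitted:

```python
import random
from collections import deque
import numpy as np

replay = deque(maxlen=100000)             # replay memory with capacity N
epsilon, gamma, batch_size = 0.1, 0.99, 32

for episode in range(num_episodes):
    s = preprocess(env.reset())           # starting game screen -> 84x84x4 state
    done = False
    while not done:
        if random.random() < epsilon:             # with small probability, explore
            a = random.randrange(num_actions)
        else:                                     # otherwise act greedily
            a = int(np.argmax(q_net(s)))
        s_next, r, done = env.step(a)
        s_next = preprocess(s_next)
        replay.append((s, a, r, s_next, done))    # store the transition
        s = s_next

        if len(replay) >= batch_size:             # experience replay step
            batch = random.sample(replay, batch_size)
            for (bs, ba, br, bs2, bdone) in batch:
                target = br if bdone else br + gamma * float(np.max(q_net(bs2)))
                sgd_step(q_net, bs, ba, target)   # minimize (target - Q(bs, ba))^2
```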
But it is looking for the ball. Okay, so now after some more training, it looks like a couple hours. Okay, so now it's learning to do a pretty good job here. So it's able to continuously follow this ball and is able to remove most of the blocks. Right, so after 240 minutes. Okay, so here it's found the pro strategy, right? You want to get all the way to the top and then have it go by itself. Okay, so this is an example of using deep Q-learning in order to train an agent to be able to play Atari games. It's able to do this on many Atari games, and you can check out some more of this online. Okay, so we've talked about Q-learning. But there is a problem with Q-learning, right? It can be challenging, and what's the problem? Well, the problem can be that the Q-function is very complicated. Right, so we're saying that we want to learn the value of every state, action pair. So let's say you have something like, for example, a robot wanting to grasp an object. Right, you're going to have a really high-dimensional state; let's say you have even just all of your joint positions and angles. Right, and so learning the exact value of every state, action pair that you have can be really, really hard to do. But on the other hand, your policy can be much simpler. Right, like what you want this robot to do is maybe just this simple motion of closing your hand, right? Just move your fingers in this particular direction and keep going. And so that leads to the question of, can we just learn this policy directly? Right, is it possible, maybe, to just find the best policy from a collection of policies, without trying to go through this process of estimating your Q-value and then using that to infer your policy? So this is an approach that we're going to call policy gradients. And so, formally, let's define a class of parametrized policies, parametrized by weights theta, and for each policy let's define the value of the policy. So J, our value given parameters theta, is going to be our expected cumulative sum of future rewards, the same reward that we've been using. And so our goal then, under this setup, is that we want to find an optimal policy, theta star, which is the arg max over theta of J of theta. So we want to find the policy parameters that give us the best expected reward. So, how can we do this? Any ideas? Okay, well, what we can do is just gradient ascent on our policy parameters, right? We've learned that given some objective and some parameters, we can just use gradient ascent in order to continuously improve our parameters. And so let's talk more specifically about how we can do this, with what we're going to call here the REINFORCE algorithm. So, mathematically, we can write out our expected future reward over trajectories, and so we're going to sample these trajectories of experience, right, like for example the episodes of game play that we talked about earlier: s-zero, a-zero, r-zero, s-one, a-one, r-one, and so on, using some policy pi-theta. Right, and then, for each trajectory we can compute a reward for that trajectory. It's the cumulative reward that we got from following this trajectory. And then the value of a policy, pi sub theta, is going to be just the expected reward of the trajectories that we can get from following pi sub theta.
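In symbols, the value of a parametrized policy and the objective being described are roughly:

$$
J(\theta) = \mathbb{E}_{\tau \sim p(\tau;\theta)}\big[r(\tau)\big]
          = \mathbb{E}\!\left[\sum_{t \ge 0} \gamma^{t} r_t \;\middle|\; \pi_\theta\right],
\qquad
\theta^* = \arg\max_{\theta} J(\theta).
$$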
So that's here, this expectation over trajectories that we can get, sampling trajectories from our policy. Okay. So, we want to do gradient ascent, right? So let's differentiate this. Once we differentiate this, then we can just take gradient steps, like normal. So, the problem is that if we try and just differentiate this exactly, it's intractable, right? So, the gradient of an expectation is problematic when p is dependent on theta here, because here we want to take this gradient of p of tau given theta, but we'd have to take this integral over tau. Right, so this is intractable. However, we can use a trick here to get around this. And the trick is: taking this gradient that we want, of p, we can rewrite it by just multiplying by one, that is, multiplying top and bottom by p of tau given theta. Right, and then if we look at these terms that we have now, in the way that I've written this, on the left and the right, this is actually going to be equivalent to p of tau times the gradient with respect to theta of log p. Right, because the gradient of log p is just going to be one over p times the gradient of p. Okay, so if we then inject this back into the expression that we had earlier for this gradient, we can see what this will actually look like, right, because now we have the gradient of log p times the probability of each of these trajectories, and then taking this integral over tau, this is now going to be an expectation over our trajectories tau. And so what we've done here is taken a gradient of an expectation and transformed it into an expectation of gradients. Right, and so now we can use sampled trajectories in order to estimate our gradient. And so we do this using Monte Carlo sampling, and this is one of the core ideas of REINFORCE. Okay, so looking at this expression that we want to compute, can we compute these quantities without knowing the transition probabilities? Alright, so we have that p of tau is going to be the probability of a trajectory. It's going to be the product of all of our transition probabilities of the next state that we get, given our current state and action, as well as the probabilities of the actions that we've taken under our policy pi. Right, so we're going to multiply all of these together and get the probability of our trajectory. So for this log of p of tau that we want to compute, we just take the log, and this will separate out into a sum once we push the log inside. And then here, when we differentiate this, we can see we want to differentiate with respect to theta, but this first term that we have here, the log of the state transition probabilities, has no theta in it, and so the only place where we have theta is the second term, the log of pi sub theta of our action given our state. So this is the only term that we keep in our gradient estimate, and we can see that this doesn't depend on the transition probabilities, right, so we actually don't need to know our transition probabilities in order to compute our gradient estimate. And so, therefore, when we're sampling these, for any given trajectory tau, we can estimate the gradient of J of theta using this gradient estimate. This is shown here for a single trajectory from what we had earlier, and then we can also sample over multiple trajectories to get the expectation.
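Putting the derivation just described into one line, the REINFORCE gradient estimate is roughly:

$$
\nabla_\theta J(\theta)
= \mathbb{E}_{\tau}\big[\, r(\tau)\, \nabla_\theta \log p(\tau;\theta) \,\big]
\approx \sum_{t \ge 0} r(\tau)\, \nabla_\theta \log \pi_\theta(a_t \mid s_t),
$$

where the transition-probability terms have dropped out because they don't depend on theta, and the sum is evaluated on sampled trajectories.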
Okay, so given this gradient estimator that we've derived, the interpretation that we can make from it is that if our reward for a trajectory is high, if the reward that we got from taking this sequence of actions was good, then let's push up the probabilities of all the actions that we've seen. Right, we're just going to say that these were good actions that we took. And then if the reward is low, we want to push down these probabilities. We want to say these were bad actions, let's try and not sample these so much. Right, and so we can see that's what's happening here, where we have pi of a given s. This is the likelihood of the action that we've taken, and we're going to take the gradient, which is going to tell us how much we should change the parameters in order to increase the likelihood of our action a, right? And then we're going to take this and scale it by how much reward we actually got, so how good these actions were, in reality. Okay, so this might seem simplistic, to say that, you know, if a trajectory is good, then we're saying here that all of its actions were good. Right? But, in expectation, this actually averages out. So we have an unbiased estimator here, and so if you have many samples of this, then we will get an accurate estimate of our gradient. And this is nice because we can just take gradient steps and we know that we're going to be improving our objective and getting closer to at least some local optimum of our policy parameters theta. Alright, but there is a problem with this, and the problem is that this also suffers from high variance. Because this credit assignment is really hard. Right, we're saying that given a reward that we got, we're going to say all of the actions were good, and we're just going to hope that this assignment of which actions were actually the best actions, the ones that mattered, is going to average out over time. And so this is really hard and we need a lot of samples in order to have a good estimate. Alright, so this leads to the question of, is there anything that we can do to reduce the variance and improve the estimator? And so variance reduction is an important area of research in policy gradients, and in coming up with ways to improve the estimator and require fewer samples. Alright, so let's look at a couple of ideas of how we can do this. So given our gradient estimator, the first idea is that we can push up the probability of an action based only on its effect on future rewards from that state, right? So now, instead of scaling this likelihood of the action by the total reward of its trajectory, let's look more specifically at just the sum of rewards coming from this time step on to the end, right? And so, this is basically saying that how good an action is, is only specified by how much future reward it generates. Which makes sense. Okay, so a second idea that we can also use is using a discount factor in order to ignore delayed effects. Alright, so here we've added back in this discount factor that we've seen before, which is saying that, you know, our discount factor is going to tell us how much we care about rewards that are coming up soon versus rewards that come much later on. Right, so now we're going to say how good or bad an action is by looking more at the local neighborhood of rewards it generates in the immediate near future, and down-weighting the ones that come later on.
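With those two variance-reduction ideas folded in, the per-timestep scaling factor becomes the discounted reward-to-go, so the estimator looks roughly like:

$$
\nabla_\theta J(\theta) \approx \sum_{t \ge 0} \Big( \sum_{t' \ge t} \gamma^{\,t'-t} r_{t'} \Big)\, \nabla_\theta \log \pi_\theta(a_t \mid s_t).
$$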
Okay so these are some straightforward ideas that are generally used in practice. So, a third idea is this idea of using a baseline in order to reduce your variance. And so, a problem with just using the raw value of your trajectories is that this isn't necessarily meaningful, right? So, for example, if your rewards are all positive, then you're just going to keep pushing up the probabilities of all your actions. And of course, you'll push them up to various degrees, but what's really important is whether a reward is better or worse than what you were expecting to get. Alright, so in order to address this, we can introduce a baseline function that's dependent on the state. Right, so this baseline function tells us what our guess is for what we expect to get from this state, and then the scaling factor that we're going to use to push our probabilities up or down can now just be our expected sum of future rewards, minus this baseline, so now it's relative: how much better or worse is the reward that we got compared to what we expected. And so how can we choose this baseline? Well, a very simple baseline, the simplest you can use, is just taking a moving average of rewards that you've experienced so far. So you can even do this over all trajectories, and this is just an average of what rewards have I been seeing as I've been training, and as I've been playing these episodes? Right, and so this gives some idea of whether the reward that I currently get was relatively better or worse. And there are some variants of this that you can use, but the variance reductions that we've seen so far are all used in what's typically called the "vanilla REINFORCE" algorithm. Right, so looking at the cumulative future reward, having a discount factor, and some simple baselines. Now let's talk about how we can think about this idea of a baseline and potentially choose better baselines. Right, so if we're going to think about what's a better baseline that we can choose, what we want to do is we want to push up the probability of an action from a state, if the action was better than the expected value of what we should get from that state. So, thinking about the value of what we're going to expect from the state, what does this remind you of? Does this remind you of anything that we talked about earlier in this lecture? Yes. [inaudible from audience] Yeah, so the value functions, right? The value functions that we talked about with Q-learning. So, exactly. So Q-functions and value functions. And so, the intuition is that, well, we're happy with taking an action in a state s, if the Q-value of taking that specific action from this state is larger than the value function, the expected value of the cumulative future reward that we can get from this state. Right, so this means that this action was better than other actions that we could've taken. And on the contrary, we're unhappy if this value, this difference, is negative or small. Right, so now if we plug this in as our scaling factor of how much we want to push up or down the probabilities of our actions, then we get this estimator here.
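Going back to the simple moving-average baseline mentioned above for a moment, a minimal sketch of it might look like this; the smoothing constant beta is an assumed hyperparameter, not a value from the lecture.

# Sketch of the simple moving-average baseline: keep a running average of the
# returns seen so far and subtract it, so the scaling factor becomes how much
# better or worse this return was than what we have been getting.
baseline = 0.0
beta = 0.9          # assumed smoothing constant

def centered_return(trajectory_return):
    global baseline
    advantage = trajectory_return - baseline
    baseline = beta * baseline + (1 - beta) * trajectory_return
    return advantage

print(centered_return(1.0), centered_return(1.0), centered_return(0.2))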
Right, so it's going to be exactly the same as before, but now where before we had our cumulative expected reward, with our variance reduction techniques and baselines, here we can just plug in this difference of how much better our current action was, based on our Q-function minus our value function from that state. Right, but with what we've talked about so far with our REINFORCE algorithm, we don't know what Q and V actually are. So can we learn these? And the answer is yes, using Q-learning, what we've already talked about before. So we can combine policy gradients, which we've just been talking about, with Q-learning, by training both an actor, which is the policy, as well as a critic, right, a Q-function, which is going to tell us how good we think a state is, and an action in a state. Right, so in this approach, the actor is going to decide which action to take, and then the critic, or Q-function, is going to tell the actor how good its action was and how it should adjust. And this also alleviates a little bit of the task of the critic, compared to the Q-learning problems that we talked about earlier of having to learn a Q-value for every state, action pair, because here it only has to learn this for the state-action pairs that are generated by the policy. It only needs to know this where it matters for computing this scaling factor. Right, and then we can also, as we're learning this, incorporate all of the Q-learning tricks that we saw earlier, such as experience replay. And so now I'm also going to just define this term that we saw here: Q of s, a, how good was an action in a given state, minus V of s, our expected value of how good the state is. We call this the advantage function. Right, so the advantage function is how much advantage did we get from playing this action? How much better the action was than expected. So, using this, we can put together our full actor-critic algorithm. And so what this looks like is that we're going to start off by initializing our policy parameters theta, and our critic parameters that we'll call phi. And then for each iteration of training, we're going to sample M trajectories under the current policy. Right, we're going to play our policy and get these trajectories as s-zero, a-zero, r-zero, s-one and so on. Okay, and then we're going to compute the gradients that we want. Right, so for each of these trajectories, and at each time step, we're going to compute this advantage function, and then we're going to use that in the gradient estimator that we showed earlier, and accumulate our gradient estimate. And then we're also going to train our critic parameters phi in a similar way, as we saw earlier, basically trying to learn our value function, which comes down to minimizing this advantage term, and this will encourage it to be closer to the Bellman equation that we saw earlier, right? And so, this is basically just iterating between learning and optimizing our policy function, as well as our critic function. And so then we're going to update the gradients, and then we're going to go through and just continuously repeat this process.
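To show the overall shape of that loop, here is a toy actor-critic sketch on a two-armed bandit with a single state. The bandit itself and all the constants are made-up stand-ins; a real implementation would use neural networks for the actor and critic and a real environment.

import numpy as np

# Toy actor-critic sketch on a two-armed bandit with a single state, just to
# show the structure of the loop. The actor is a softmax policy with
# parameters theta; the critic is a single value estimate V(s), parameter phi.
np.random.seed(0)
theta = np.zeros(2)                      # actor parameters (action preferences)
phi = 0.0                                # critic parameter: V(s) for the one state
true_means = np.array([1.0, 2.0])        # assumed reward means for the two arms
alpha_actor, alpha_critic = 0.05, 0.1

def pi():
    e = np.exp(theta - theta.max())
    return e / e.sum()

for it in range(2000):
    p = pi()
    a = np.random.choice(2, p=p)                  # actor picks an action
    r = float(np.random.randn()) + true_means[a]  # environment returns a reward
    advantage = r - phi                           # sampled Q(s,a) minus V(s)
    theta += alpha_actor * advantage * (np.eye(2)[a] - p)   # push log prob up or down
    phi += alpha_critic * (r - phi)               # move V(s) toward the observed return

print(pi(), phi)    # the policy should come to prefer the second arm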
Okay, so now let's look at some examples of REINFORCE in action, and let's look first here at something called the Recurrent Attention Model, which is something that, which is a model also referred to as hard attention, but you'll see a lot in, recently, in computer vision tasks for various purposes. Right, and so the idea behind this is here, I've talked about the original work on hard attention, which is on image classification, and your goal is to still predict the image class, but now you're going to do this by taking a sequence of glimpses around the image. You're going to look at local regions around the image and you're basically going to selectively focus on these parts and build up information as you're looking around. Right, and so the reason that we want to do this is, well, first of all it has some nice inspiration from human perception in eye movement. Let's say we're looking at a complex image and we want to determine what's in the image. Well, you know, we might, maybe look at a low-resolution of it first, and then look specifically at parts of the image that will give us clues about what's in this image. And then, this approach of just looking at, looking around at an image at local regions, is also going to help you save computational resources, right? You don't need to process the full image. In practice, what usually happens is you look at a low-resolution image first, of a full image, to decide how to get started, and then you look at high-res portions of the image after that. And so this saves a lot of computational resources and you can think about, then, benefits of this to scalability, right, being able to, let's say process larger images more efficiently. And then, finally, this could also actually help with actual classification performance, because now you're able to ignore clutter and irrelevant parts of the image. Right? Like, you know, instead of always putting through your ConvNet, all the parts of your image, you can use this to, maybe, first prune out what are the relevant parts that I actually want to process, using my ConvNet. Okay, so what's the reinforcement learning formulation of this problem? Well, our state is going to be the glimpses that we've seen so far, right? Our what's the information that we've seen? Our action is then going to be where to look next in the image. Right, so in practice, this can be something like the x, y-coordinates, maybe centered around some fixed-sized glimpse that you want to look at next. And then the reward for the classification problem is going to be one, at the final time step, if our image is correctly classified, and zero otherwise. And so, because this glimpsing, taking these glimpses around the image is a non-differentiable operation, this is why we need to use reinforcement learning formulation, and learn policies for how to take these glimpse actions and we can train this using REINFORCE. So, given the state of glimpses so far, the core of our model is going to be this RNN that we're going to use to model the state, and then we're going to use our policy parameters in order to output the next action. Okay, so what this model looks like is we're going to take an input image. Right, and then we're going to take a glimpse at this image. So here, this glimpse is the red box here, and this is all blank, zeroes. And so we'll pass what we see so far into some neural network, and this can be any kind of network depending on your task. 
In the original experiments that I'm showing here, on MNIST, this is very simple, so you can just use a couple of small, fully-connected layers, but you can imagine for more complex images and other tasks you may want to use fancier ConvNets. Right, so you've passed this into some neural network, and then, remember I said we're also going to be integrating our state of, glimpses that we've seen so far, using a recurrent network. So, I'm just going to we'll see that later on, but this is going to go through that, and then it's going to output my x, y-coordinates, of where I'm going to see next. And in practice, this is going to be We want to output a distribution over actions, right, and so, what this is going to be it's going to be a gaussian distribution and we're going to output the mean. You can also output a mean and variance of this distribution in practice. The variance can also be fixed. Okay, so we're going to take this action that we're now going to sample a specific x, y location from our action distribution and then we're going to put this in to get the next, extract the next glimpse from our image. Right, so here we've moved to the end of the two, this tail part of the two. And so now we're actually starting to get some signal of what we want to see, right? Like, what we want to do is we want to look at the relevant parts of the image that are useful for classification. So we pass this through, again, our neural network layers, and then also through our recurrent network, right, that's taking this input as well as this previous hidden state, and we're going to use this to get a, so this is representing our policy, and then we're going to use this to output our distribution for the next location that we want to glimpse at. So we can continue doing this, you can see in this next glimpse here, we've moved a little bit more toward the center of the two. Alright, so it's probably learning that, you know, once I've seen this tail part of the two, that looks like this, maybe moving in this upper left-hand direction will get you more towards a center, which will also have a value, valuable information. And then we can keep doing this. And then finally, at the end, at our last time step, so we can have a fixed number of time steps here, in practice something like six or eight. And then at the final time step, since we want to do classification, we'll have our standard Softmax layer that will produce a distribution of probabilities for each class. And then here the max class was a two, so we can predict that this was a two. Right, and so this is going to be the set up of our model and our policy, and then we have our estimate for the gradient of this policy that we've said earlier we could compute by taking trajectories from here and using those to do back prop. And so we can just do this in order to train this model and learn the parameters of our policy, right? All of the weights that you can see here. Okay, so here's an example of a policies trained on MNIST, and so you can see that, in general, from wherever it's starting, usually learns to go closer to where the digit is, and then looking at the relevant parts of the digit, right? So this is pretty cool and this you know, follows kind of what you would expect, right, if you were to choose places to look next in order to most efficiently determine what digit this is. 
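As a rough sketch of that glimpse-location policy: the network output is treated as the mean of a Gaussian with a fixed variance, and the next x, y location is sampled from it. The weight matrix, hidden state, and sigma below are placeholder assumptions, not values from the paper.

import numpy as np

# Sketch of the glimpse-location policy: the network outputs a mean (x, y),
# the variance is held fixed, and the next glimpse location is sampled from
# that Gaussian. W_loc, hidden, and sigma are placeholder assumptions.
np.random.seed(0)
hidden_dim = 8
W_loc = np.random.randn(2, hidden_dim) * 0.01   # maps hidden state -> mean location
sigma = 0.1                                     # assumed fixed std of the policy

hidden = np.random.randn(hidden_dim)            # stand-in for the RNN hidden state
mean_xy = np.tanh(W_loc @ hidden)               # keep the mean location in [-1, 1]
next_glimpse_xy = mean_xy + sigma * np.random.randn(2)

# log pi(a|s) for this Gaussian policy, which is what REINFORCE differentiates:
log_prob = (-0.5 * np.sum((next_glimpse_xy - mean_xy) ** 2) / sigma ** 2
            - np.log(2 * np.pi * sigma ** 2))
print(next_glimpse_xy, log_prob)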
Right, and so this idea of hard attention, of recurrent attention models, has also been used in a lot of tasks in computer vision in the last couple of years, so you'll see this used, for example, in fine-grained image recognition. So, I mentioned earlier that one of the useful benefits of this can be both to save on computational efficiency as well as to ignore clutter and irrelevant parts of the image, and when you have fine-grained image classification problems, you usually want both of these. You want to keep high resolution, so that you can look at, you know, important differences. And then you also want to focus on these differences and ignore irrelevant parts. Yeah, question. [inaudible question from audience] Okay, so yeah, so the question is how is there computational efficiency, because we also have this recurrent neural network in place. So that's true, it depends on exactly what is your problem, what is your network, and so on, but you can imagine that if you had some really high-resolution image and you don't want to process the entire image with some huge ConvNet or some huge, you know, network, now you can get some savings by just focusing on specific smaller parts of the image. You only process those parts of the image. But, you're right, that it depends on exactly what problem set-up you have. This has also been used in image captioning, so if we're going to produce a caption for an image, we can have the model use this attention mechanism to generate the caption, and what it usually ends up learning is these policies where it'll focus on specific parts of the image, in sequence, and as it focuses on each part, it'll generate some words, or the part of the caption referring to that part of the image. And then it's also been used in tasks such as visual question answering, where we ask a question about the image and you want the model to output some answer to your question, for example, I don't know, how many chairs are around the table? And so you can see how this attention mechanism might be a good type of model for learning how to answer these questions. Okay, so that was an example of policy gradients in these hard attention models. And so, now I'm going to talk about one more example that also uses policy gradients, which is learning how to play Go. Right, so DeepMind had this agent for playing Go, called AlphaGo, that's been in the news a lot in the past, last year and this year. So, sorry? [inaudible comment from audience] And yesterday, yes, that's correct. So this is very exciting, recent news as well. So last year, a first version of AlphaGo was put into a competition against one of the best Go players of recent years, Lee Sedol, and the agent was able to beat him four to one in a series of five matches. And actually, right now, there's another match with Ke Jie, who is the current world number one, and it's a best of three in China right now. And so the first game was yesterday. AlphaGo won. I think it was by just half a point, and so there's two more games to watch. These are all live-streamed, so you guys should also go online and watch these games. It's pretty interesting to hear the commentary. But, so what is this AlphaGo agent, right, from DeepMind? And it's based on a lot of what we've talked about so far in this lecture.
And what it is, is a mix of supervised learning and reinforcement learning, as well as a mix of an older method for Go, Monte Carlo Tree Search, with recent deep RL approaches. So, okay, so how does AlphaGo beat the Go world champion? Well, to train AlphaGo, what it takes as input is going to be a featurization of the board. So it's basically, right, your board and the positions of the pieces on the board. That's your natural state representation. And what they do in order to improve performance a little bit is that they featurize this into some more channels: one is all the different stone colors, so this is kind of like the configuration of your board. Also some channels for, for example, which moves are legal, some bias channels, various things. And then, given this state, right, it's going to first train a network that's initialized with supervised training from professional Go games. So, given the current board configuration, or the featurization of this, what's the correct next action to take? Alright, so given examples of professional games played, you know, just collected over time, we can take all of these professional Go moves and train a standard, supervised mapping from board state to action to take. Alright, so they take this, which is a pretty good start, and then they're going to use this to initialize a policy network. Right, so the policy network is just going to have the exact same structure: the input is your board state and your output is the action that you're going to take. And this was the set-up for the policy gradients that we just saw, right? So now we're going to just continue training this using policy gradients. And it's going to do this reinforcement learning training by playing against random previous iterations of itself. So self play, and the reward it's going to get is one if it wins, and negative one if it loses. And what we're also going to do is we're also going to learn a value network, so, something like a critic. And then the final AlphaGo is going to combine all of these together, so policy and value networks, as well as a Monte Carlo Tree Search algorithm, in order to select actions by lookahead search. Right, so after putting all this together, the value of a node, of where you are in play and what you do next, is going to be a combination of your value function, as well as the rollout outcome that you're computing from standard Monte Carlo Tree Search rollouts. Okay, so, yeah, so these are basically the various components of AlphaGo. If you're interested in reading more about this, there's a Nature paper about this in 2016, and the version of AlphaGo that's being used in these matches is trained on, I think, a couple thousand CPUs plus a couple hundred GPUs, putting all of this together, so it's a huge amount of training that's going on, right. And yeah, so you guys should follow the games this week. It's pretty exciting. Okay, so in summary, today we've talked about policy gradients, right, which are general: you're just directly doing gradient ascent on your policy parameters, so this works well for a large class of problems, but it also suffers from high variance, so it requires a lot of samples, and your challenge here is sample efficiency.
We also talked about Q-learning, which doesn't always work; it's sometimes harder to get it to work because of this problem that we talked about earlier, where you are trying to compute this exact state, action value for very high dimensions, but when it does work, for example for the Atari games we saw earlier, then it's usually more sample efficient than policy gradients. Right, and one of the challenges in Q-learning is that you want to make sure that you're doing sufficient exploration. Yeah? [inaudible question from audience] Oh, so for Q-learning, can you do this process where you're trying to start this off with some supervised training? So, I guess the direct approach for Q-learning doesn't do that, because you're trying to regress to these Q-values, right, instead of taking policy gradients over this distribution, but I think there are ways in which you can, like, massage this type of thing to also bootstrap. Because I think bootstrapping in general, or like behavior cloning, is a good way to warm start these policies. Okay, so, right, so we've talked about policy gradients and Q-learning, and just another look at some of the guarantees that you have, right, with policy gradients. One thing we do know that's really nice is that this will always converge to a local optimum of J of theta, because we're just directly doing gradient ascent, and this local optimum is often just pretty good, right. And in Q-learning, on the other hand, we don't have any guarantees, because here we're trying to approximate this Bellman equation with a complicated function approximator, and so, in this case, this is the problem with Q-learning being a little bit trickier to train in terms of applicability to a wide range of problems. Alright, so today you got a basically very brief, kind of high-level overview of reinforcement learning and some major classes of algorithms in RL. And next time we're going to have a guest lecturer, Song Han, who's done a lot of pioneering work in model compression and energy efficient deep learning, and so he's going to talk about some of this. Thank you.
Lecture_Collection_Convolutional_Neural_Networks_for_Visual_Recognition_Spring_2017
Lecture_10_Recurrent_Neural_Networks.txt
- Okay. Can everyone hear me? Okay. Sorry for the delay. I had a bit of technical difficulty. Today was the first time I was trying to use my new touch bar Mac book pro for presenting, and none of the adapters are working. So, I had to switch laptops at the last minute. So, thanks. Sorry about that. So, today is lecture 10. We're talking about recurrent neural networks. So, as of, as usual, a couple administrative notes. So, We're working hard on assignment one grading. Those grades will probably be out sometime later today. Hopefully, they can get out before the A2 deadline. That's what I'm hoping for. On a related note, Assignment two is due today at 11:59 p.m. so, who's done with that already? About half you guys. So, you remember, I did warn you when the assignment went out that it was quite long, to start early. So, you were warned about that. But, hopefully, you guys have some late days left. Also, as another reminder, the midterm will be in class on Tuesday. If you kind of look around the lecture hall, there are not enough seats in this room to seat all the enrolled students in the class. So, we'll actually be having the midterm in several other lecture halls across campus. And we'll be sending out some more details on exactly where to go in the next couple of days. So a bit of a, another bit of announcement. We've been working on this sort of fun bit of extra credit thing for you to play with that we're calling the training game. This is this cool browser based experience, where you can go in and interactively train neural networks and tweak the hyper parameters during training. And this should be a really cool interactive way for you to practice some of these hyper parameter tuning skills that we've been talking about the last couple of lectures. So this is not required, but this, I think, will be a really useful experience to gain a little bit more intuition into how some of these hyper parameters work for different types of data sets in practice. So we're still working on getting all the bugs worked out of this setup, and we'll probably send out some more instructions on exactly how this will work in the next couple of days. But again, not required. But please do check it out. I think it'll be really fun and a really cool thing for you to play with. And will give you a bit of extra credit if you do some, if you end up working with this and doing a couple of runs with it. So, we'll again send out some more details about this soon once we get all the bugs worked out. As a reminder, last time we were talking about CNN Architectures. We kind of walked through the time line of some of the various winners of the image net classification challenge, kind of the breakthrough result. As we saw was the AlexNet architecture in 2012, which was a nine layer convolutional network. It did amazingly well, and it sort of kick started this whole deep learning revolution in computer vision, and kind of brought a lot of these models into the mainstream. Then we skipped ahead a couple years, and saw that in 2014 image net challenge, we had these two really interesting models, VGG and GoogLeNet, which were much deeper. So VGG was, they had a 16 and a 19 layer model, and GoogLeNet was, I believe, a 22 layer model. Although one thing that is kind of interesting about these models is that the 2014 image net challenge was right before batch normalization was invented. 
So at this time, before the invention of batch normalization, training these relatively deep models of roughly twenty layers was very challenging. So, in fact, both of these two models had to resort to a little bit of hackery in order to get their deep models to converge. So for VGG, they had the 16 and 19 layer models, but actually they first trained an 11 layer model, because that was what they could get to converge, and then added some extra random layers in the middle and continued training, to actually train the 16 and 19 layer models. So, managing this training process was very challenging in 2014, before the invention of batch normalization. Similarly, for GoogLeNet, we saw that GoogLeNet has these auxiliary classifiers that were stuck into lower layers of the network. And these were not really needed to get good classification performance. This was just sort of a way to cause extra gradient to be injected directly into the lower layers of the network. And this, again, was before the invention of batch normalization, and now once you have these networks with batch normalization, then you no longer need these slightly ugly hacks in order to get these deeper models to converge. Then we also saw in the 2015 image net challenge this really cool model called ResNet, these residual networks that now have these shortcut connections, these little residual blocks where we're going to take our input, pass it through some convolutional layers, and then add the input of the block to the output from those convolutional layers. This is kind of a funny architecture, but it actually has two really nice properties. One is that if we just set all the weights in this residual block to zero, then this block is computing the identity. So in some way, it's relatively easy for this model to learn not to use the layers that it doesn't need. In addition, it kind of adds this interpretation to L2 regularization in the context of these neural networks, 'cause once you put L2 regularization, remember, on the weights of your network, that's going to drive all the parameters towards zero. And maybe in your standard convolutional architecture, driving the weights towards zero doesn't make sense. But in the context of a residual network, if you drive all the parameters towards zero, that's kind of encouraging the model to not use layers that it doesn't need, because it will just drive those residual blocks towards the identity, whether or not they're needed for classification. The other really useful property of these residual networks has to do with the gradient flow in the backward pass. If you remember what happens at these addition gates in the backward pass, when upstream gradient is coming in through an addition gate, then it will split and fork along these two different paths. So then, when upstream gradient comes in, it'll take one path through these convolutional blocks, but it will also have a direct connection of the gradient through this residual connection. So then, when you imagine stacking many of these residual blocks on top of each other, and our network ends up with potentially hundreds of layers, then these residual connections give a sort of gradient super highway for gradients to flow backward through the entire network. And this allows it to train much more easily and much faster.
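Here is a tiny numpy sketch of a residual block and its backward pass, using a single fully connected layer with ReLU in place of the convolutional layers just to keep it short. The point is the addition gate: the upstream gradient is copied straight onto the shortcut path.

import numpy as np

# Tiny sketch of a residual block out = x + F(x), with F a single fully
# connected layer plus ReLU instead of convolutions, to keep it short.
np.random.seed(0)
D = 4
x = np.random.randn(D)
W = np.random.randn(D, D) * 0.1

h = np.maximum(0, W @ x)      # F(x)
out = x + h                   # the residual (shortcut) connection

dout = np.random.randn(D)     # pretend upstream gradient dL/dout
dh = dout                     # the addition gate copies the gradient to the F(x) branch
dx_shortcut = dout            # ... and copies it directly onto the shortcut path
dWx = dh * (h > 0)            # backprop through the ReLU
dW = np.outer(dWx, x)
dx = dx_shortcut + W.T @ dWx
# If W were all zeros, out would equal x (the block computes the identity),
# and dx would equal dout: the gradient flows straight through.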
And this actually allows these things to converge reasonably well, even when the model is potentially hundreds of layers deep. And this idea of managing gradient flow in your models is actually super important everywhere in machine learning, and super prevalent in recurrent networks as well. So we'll definitely revisit this idea of gradient flow later in today's lecture. So then, we also saw a couple of other more exotic, more recent CNN architectures last time, including DenseNet and FractalNet, and once you think about these architectures in terms of gradient flow, they make a little bit more sense. These things like DenseNet and FractalNet are adding these additional shortcut or identity connections inside the model. And if you think about what happens in the backwards pass for these models, these additional funny topologies are basically providing direct paths for gradients to flow from the loss at the end of the network more easily into all the different layers of the network. So I think that, again, this idea of managing gradient flow properly in your CNN architectures is something that we've really seen a lot more of in the last couple of years, and will probably see more of moving forward as more exotic architectures are invented. We also saw this kind of nice plot, plotting the performance of these various models against the number of flops, the number of parameters, and the run time. And there's some interesting characteristics that you can dive in and see from this plot. One idea is that VGG and AlexNet have a huge number of parameters, and these parameters actually come almost entirely from the fully connected layers of the models. So AlexNet has something like roughly 62 million parameters, and if you look at the first fully connected layer in AlexNet, it's going from an activation volume of six by six by 256 into this fully connected vector of 4096. So if you imagine what the weight matrix needs to look like at that layer, the weight matrix is gigantic. Its number of entries is six times six times 256 times 4096. And if you multiply that out, you see that that single layer has about 38 million parameters. So more than half of the parameters of the entire AlexNet model are just sitting in that one fully connected layer. And if you add up all the parameters in just the fully connected layers of AlexNet, including the other fully connected layers, you see that something like 59 of the 62 million parameters in AlexNet are sitting in these fully connected layers. So then when we move to other architectures, like GoogLeNet and ResNet, they do away with a lot of these large fully connected layers in favor of global average pooling at the end of the network. And this allows these nicer architectures to really cut down the parameter count. So that was kind of our brief recap of the CNN architectures that we saw last lecture, and then today, we're going to move to one of my favorite topics to talk about, which is recurrent neural networks. So, so far in this class, we've seen what I like to think of as kind of a vanilla feed forward network: all of our network architectures have this flavor, where we receive some input, and that input is a fixed size object, like an image or a vector. That input is fed through some set of hidden layers and produces a single output, like a set of classification scores over a set of categories.
But in some contexts in machine learning, we want to have more flexibility in the types of data that our models can process. So once we move to this idea of recurrent neural networks, we have a lot more opportunities to play around with the types of input and output data that our networks can handle. So once we have recurrent neural networks, we can do what we call these one to many models, where maybe our input is some object of fixed size, like an image, but now our output is a sequence of variable length, such as a caption, where different captions might have different numbers of words, so our output needs to be variable in length. We also might have many to one models, where our input could be variably sized. This might be something like a piece of text, and we want to say what is the sentiment of that text, whether it's positive or negative in sentiment. Or in a computer vision context, you might imagine taking as input a video, and that video might have a variable number of frames. And now we want to read this entire video of potentially variable length, and then at the end, make a classification decision about maybe what kind of activity or action is going on in that video. We might also have problems where we want both the input and the output to be variable in length. We might see something like this in machine translation, where our input is maybe some sentence in English, which could have a variable length, and our output is maybe some sentence in French, which also could have a variable length. And crucially, the length of the English sentence might be different from the length of the French sentence. So we need some models that have the capacity to accept both variable length sequences on the input and on the output. Finally, we might also consider problems where our input is variable in length, like something like a video sequence with a variable number of frames, and now we want to make a decision for each element of that input sequence. So in the context of videos, that might be making some classification decision along every frame of the video. And recurrent neural networks are this kind of general paradigm for handling variable sized sequence data that allows us to pretty naturally capture all of these different types of setups in our models. So recurrent neural networks are actually important even for some problems that have a fixed size input and a fixed size output. Recurrent neural networks can still be pretty useful. So in this example, we might want to do, for example, sequential processing of our input. So here, we're receiving a fixed size input like an image, and we want to make a classification decision about, like, what number is being shown in this image? But now, rather than just doing a single feed forward pass and making the decision all at once, this network is actually looking around the image and taking various glimpses of different parts of the image. And then after making some series of glimpses, then it makes its final decision as to what kind of number is present. So here, even though our input was an image and our output was a classification decision, even in this context, this idea of being able to handle variable length processing with recurrent neural networks can lead to some really interesting types of models. There's a really cool paper that I like that applied this same type of idea to generating new images.
Where now, we want the model to synthesize brand new images that look kind of like the images it saw in training, and we can use a recurrent neural network architecture to actually paint these output images sort of one piece at a time in the output. You can see that, even though our output is this fixed size image, we can have these models that are working over time to compute parts of the output one at a time, sequentially. And we can use recurrent neural networks for that type of setup as well. So after this sort of cool pitch about all these cool things that RNNs can do, you might wonder, like, what exactly are these things? So in general, a recurrent neural network has this little recurrent core cell, and it will take some input x, feed that input into the RNN, and that RNN has some internal hidden state, and that internal hidden state will be updated every time that the RNN reads a new input. And that internal hidden state will be then fed back to the model the next time it reads an input. And frequently, we will want our RNNs to also produce some output at every time step, so we'll have this pattern where it will read an input, update its hidden state, and then produce an output. So then the question is: what is the functional form of this recurrence relation that we're computing? So inside this little green RNN block, we're computing some recurrence relation with a function f. So this function f will depend on some weights, w. It will accept the previous hidden state, h t - 1, as well as the input at the current time step, x t, and this will output the next hidden state, or the updated hidden state, that we call h t. And then, as we read the next input, this new hidden state, h t, will then just be passed into the same function as we read the next input, x t plus one. And now, if we wanted to produce some output at every time step of this network, we might attach some additional fully connected layers that read in this h t at every time step, and make that decision based on the hidden state at every time step. And one thing to note is that we use the same function, f w, and the same weights, w, at every time step of the computation. So then, kind of the simplest functional form that you can imagine is what we call this vanilla recurrent neural network. So here, we have this same functional form from the previous slide, where we're taking in our previous hidden state and our current input, and we need to produce the next hidden state. And the kind of simplest thing you might imagine is that we have some weight matrix, W xh, that we multiply against the input, x t, as well as another weight matrix, W hh, that we multiply against the previous hidden state. So we make these two multiplications against our two states, add them together, and squash them through a tanh, so we get some kind of non-linearity in the system. You might be wondering why we use a tanh here and not some other type of non-linearity, after all the negative things we've said about tanh in previous lectures, and I think we'll return a little bit to that later on when we talk about more advanced architectures, like LSTMs. So then, in addition, in this architecture, if we wanted to produce some y t at every time step, you might have another weight matrix that accepts this hidden state and then transforms it to some y, to produce maybe some class score predictions at every time step.
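Here is a minimal numpy sketch of that vanilla RNN step, h_t = tanh(W_hh h_{t-1} + W_xh x_t) with y_t = W_hy h_t; the dimensions are arbitrary placeholders.

import numpy as np

# Minimal sketch of the vanilla RNN step:
#   h_t = tanh(W_hh @ h_{t-1} + W_xh @ x_t),   y_t = W_hy @ h_t
# The dimensions here are arbitrary placeholders.
np.random.seed(0)
input_dim, hidden_dim, output_dim = 4, 8, 4
W_xh = np.random.randn(hidden_dim, input_dim) * 0.01
W_hh = np.random.randn(hidden_dim, hidden_dim) * 0.01
W_hy = np.random.randn(output_dim, hidden_dim) * 0.01

def rnn_step(x_t, h_prev):
    h_t = np.tanh(W_hh @ h_prev + W_xh @ x_t)
    y_t = W_hy @ h_t            # e.g. unnormalized class scores at this time step
    return h_t, y_t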
And when I think about recurrent neural networks, I kind of think about, you can kind of think of recurrent neural networks in two ways. One is this concept of having a hidden state that feeds back at itself, recurrently. But I find that picture a little bit confusing. And sometimes, I find it clearer to think about unrolling this computational graph for multiple time steps. And this makes the data flow of the hidden states and the inputs and the outputs and the weights maybe a little bit more clear. So then, at the first time step, we'll have some initial hidden state, h zero. This is usually initialized to zeros in most contexts, and then we'll have some first input, x one. This initial hidden state, h zero, and our current input, x one, will go into our f w function. This will produce our next hidden state, h one. And then, we'll repeat this process when we receive the next input. So now our current h one and our x two will go into that same f w, to produce our next hidden state, h two. And this process will repeat over and over again, as we consume all of the inputs, x t, in our sequence of inputs. And now, one thing to note is that we can actually make this even more explicit and write the w matrix in our computational graph. And here you can see that we're re-using the same w matrix at every time step of the computation. So now every time that we have this little f w block, it's receiving a unique h and a unique x, but all of these blocks are taking the same w. And if you remember, we talked about how gradient flows in back propagation: when you re-use the same node multiple times in a computational graph, then remember, during the backward pass, you end up summing the gradients into the w matrix when you're computing dLoss/dW. So, if you kind of think about the back propagation for this model, then you'll have a separate gradient for w flowing from each of those time steps, and then the final gradient for w will be the sum of all of those individual per time step gradients. We can also write this y t explicitly in this computational graph. So then, this output, h t, at every time step might feed into some other little neural network that can produce a y t, which might be some class scores, or something like that, at every time step. We can also make the loss more explicit. So in many cases, you might imagine that you have some ground truth label at every time step of your sequence, and then you'll compute some individual loss at every time step on these outputs, y t. And this loss will frequently be something like softmax loss, in the case where you have, maybe, a ground truth label at every time step of the sequence. And now the final loss for this entire training step will be the sum of these individual losses. So now, we have a scalar loss at every time step, and we just sum them up to get our final scalar loss at the top of the network. And now, if you think about, again, back propagation through this thing, in order to train the model, we need to compute the gradient of the loss with respect to w. So, we'll have loss flowing from that final loss into each of these time steps, and then each of those time steps will compute a local gradient on the weights, w, which will all then be summed to give us our final gradient for the weights, w.
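Continuing the sketch above, here is what unrolling that step over a short sequence and summing the per-time-step softmax losses might look like; the toy inputs and targets are placeholders.

# Continuing the sketch above: unroll rnn_step over a short sequence and sum
# the per-time-step softmax losses. The inputs and targets are placeholders.
xs = [np.eye(input_dim)[i] for i in [0, 1, 2, 2]]   # toy one-hot inputs
targets = [1, 2, 2, 3]                              # toy ground truth labels

h = np.zeros(hidden_dim)
total_loss = 0.0
for x_t, target in zip(xs, targets):
    h, y = rnn_step(x_t, h)
    p = np.exp(y - y.max()); p /= p.sum()           # softmax over the scores
    total_loss += -np.log(p[target])                # per-time-step softmax loss
print(total_loss)
# In the backward pass, dLoss/dW_hh, dLoss/dW_xh and dLoss/dW_hy would each be
# the sum of the per-time-step gradients, because the same weights are reused.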
Now if we have this sort of many to one situation, where maybe we want to do something like sentiment analysis, then we would typically make that decision based on the final hidden state of this network, because this final hidden state kind of summarizes all of the context from the entire sequence. Also, if we have kind of a one to many situation, where we want to receive a fixed size input and then produce a variably sized output, then you'll commonly use that fixed size input to initialize, somehow, the initial hidden state of the model, and now the recurrent network will tick for each element in the output. And now, as you produce your variably sized output, you'll unroll the graph for each element in the output. So when we talk about the sequence to sequence models, where you might do something like machine translation, where you take a variably sized input and produce a variably sized output, you can think of this as a combination of the many to one plus the one to many. So, we'll kind of proceed in two stages, what we call an encoder and a decoder. So for the encoder, we'll receive the variably sized input, which might be your sentence in English, and then summarize that entire sentence using the final hidden state of the encoder network. And now we're in this many to one situation, where we've summarized this entire variably sized input in this single vector, and now we have a second decoder network, which is a one to many situation, which will input that single vector summarizing the input sentence and now produce this variably sized output, which might be your sentence in another language. And now in this variably sized output, we might make some predictions at every time step, maybe about what word to use. And you can imagine kind of training this entire thing by unrolling this computational graph, summing the losses over the output sequence, and just performing back propagation as usual. So as a bit of a concrete example, one thing that we frequently use recurrent neural networks for is this problem called language modeling. So in the language modeling problem, we want to have our network, sort of, understand how to produce natural language. So this might happen at the character level, where our model will produce characters one at a time. This might also happen at the word level, where our model will produce words one at a time. But in a very simple example, you can imagine this character level language model where the network will read some sequence of characters, and then it needs to predict: what will the next character be in this stream of text? So in this example, we have this very small vocabulary of four letters, h, e, l, and o, and we have this example training sequence of the word hello, h, e, l, l, o. So during training, when we're training this language model, we will feed the characters of this training sequence as inputs; these will be the x ts that we feed in as the inputs to our recurrent neural network. And then, each of these inputs is a letter, and we need to figure out a way to represent letters in our network. So what we'll typically do is figure out what is our total vocabulary. In this case, our vocabulary has four elements. And each letter will be represented by a vector that has zeros in every slot but one, and a one for the slot in the vocabulary corresponding to that letter.
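Here is what that one-hot encoding looks like for the toy vocabulary in this example; the variable names are just illustrative.

import numpy as np

# Sketch of that one-hot encoding for the toy vocabulary {h, e, l, o} and the
# training sequence "hello"; the variable names are just illustrative.
vocab = ['h', 'e', 'l', 'o']
char_to_ix = {c: i for i, c in enumerate(vocab)}

def one_hot(c):
    v = np.zeros(len(vocab))
    v[char_to_ix[c]] = 1.0
    return v

inputs = [one_hot(c) for c in 'hell']        # the x_t inputs
targets = [char_to_ix[c] for c in 'ello']    # the next character at each step
print(inputs[0])                             # [1. 0. 0. 0.], the letter 'h'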
In this little example, since our vocab has the four letters, h, e, l, o, the h in our input sequence is represented by a four element vector with a one in the first slot and zeros in the other three slots. And we use the same sort of pattern to represent all the different letters in the input sequence. Now, during the forward pass, here's what this network is doing: at the first time step, it will receive the input letter h. That will go into the first RNN cell, and then we'll produce this output, y t, which is the network making predictions about, for each letter in the vocabulary, which letter does it think is most likely going to come next. In this example, the correct output letter was e, because our training sequence was hello, but the model is actually predicting, I think it's actually predicting o as the most likely letter. So in this case, this prediction was wrong, and we would use softmax loss to quantify our unhappiness with these predictions. At the next time step, we would feed in the second letter in the training sequence, e, and this process will repeat. We'll now represent e as a vector, use that input vector together with the previous hidden state to produce a new hidden state, and now use the second hidden state to, again, make predictions over every letter in the vocabulary. In this case, because our training sequence was hello, after the letter e, we want our model to predict l. In this case, our model may have very low predictions for the letter l, so we would incur high loss. And you kind of repeat this process over and over, and if you train this model with many different sequences, then eventually it should learn how to predict the next character in a sequence based on the context of all the previous characters that it's seen before. And now, if you think about what happens at test time, after we train this model, one thing that we might want to do with it is to sample from the model, and actually use this trained neural network model to synthesize new text that kind of looks similar in spirit to the text that it was trained on. The way that this will work is we'll typically seed the model with some input prefix of text. In this case, the prefix is just the single letter h, and now we'll feed that letter h through the first time step of our recurrent neural network. It will produce this distribution of scores over all the characters in the vocabulary. Now, at test time, we'll use these scores to actually sample from it. So we'll use a softmax function to convert those scores into a probability distribution, and then we will sample from that probability distribution to actually synthesize the second letter in the sequence. And in this case, even though the scores were pretty bad, maybe we got lucky and sampled the letter e from this probability distribution. And now, we'll take this letter e that was sampled from this distribution and feed it back as input into the network at the next time step. Now, we'll take this e, pull it down from the top, feed it back into the network as one of these, sort of, one hot vector representations, and then repeat the process in order to synthesize the next letter in the output. And we can repeat this process over and over again to synthesize a new sequence using this trained model, where we're synthesizing the sequence one character at a time using these predicted probability distributions at each time step.
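A minimal sketch of that test-time sampling loop might look like the following; the weights here are random stand-ins rather than trained parameters, so the output is gibberish, but the structure (one-hot input, softmax over scores, sample, feed the sample back in) is the same.

import numpy as np

# Sketch of the test-time sampling loop. The weights are random stand-ins for
# trained parameters, so the output is gibberish, but the structure is the
# same: one-hot input, softmax over scores, sample, feed the sample back in.
np.random.seed(0)
vocab = ['h', 'e', 'l', 'o']
V, H = len(vocab), 8
W_xh = np.random.randn(H, V) * 0.1
W_hh = np.random.randn(H, H) * 0.1
W_hy = np.random.randn(V, H) * 0.1

h = np.zeros(H)
idx = vocab.index('h')                 # seed the model with the prefix 'h'
out = ['h']
for _ in range(5):
    x = np.zeros(V); x[idx] = 1.0                       # one-hot current character
    h = np.tanh(W_hh @ h + W_xh @ x)
    scores = W_hy @ h
    p = np.exp(scores - scores.max()); p /= p.sum()     # softmax -> probabilities
    idx = np.random.choice(V, p=p)                      # sample rather than argmax
    out.append(vocab[idx])
print(''.join(out))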
Question? Yeah, that's a great question. So the question is why might we sample instead of just taking the character with the largest score? In this case, because of the probability distribution that we had, it was impossible to get the right character by taking the max, so we had to sample so the example could work out and make sense. But in practice, sometimes you'll see both. So sometimes you'll just take the argmax probability, and that will sometimes be a little bit more stable, but one advantage of sampling, in general, is that it lets you get diversity from your models. Sometimes you might have the same input, maybe the same prefix, or in the case of image captioning, maybe the same image, but then if you sample rather than taking the argmax, then you'll see that sometimes these trained models are actually able to produce multiple different types of reasonable output sequences, depending on which samples they take at the first time steps. It's actually kind of a benefit, 'cause we can now get more diversity in our outputs. Another question? Could we feed in the softmax vector instead of the one hot vector? You mean at test time? Yeah, yeah, so the question is, at test time, could we feed in this whole softmax vector rather than a one hot vector? There's kind of two problems with that. One is that that's very different from the data that it saw at training time. In general, if you ask your model to do something at test time which is different from training time, then it'll usually blow up. It'll usually give you garbage, and you'll usually be sad. The other problem is that in practice, our vocabularies might be very large. So maybe, in this simple example, our vocabulary is only four elements, so it's not a big problem. But if you're thinking about generating words one at a time, now your vocabulary is every word in the English language, which could be something like tens of thousands of elements. So in practice, this first operation that's taking in this one hot vector is often performed using sparse vector operations rather than dense vectors. It would be, sort of, computationally really bad if you wanted to feed in this dense 10,000 element softmax vector. So that's usually why we use a one hot instead, even at test time. This idea that we have a sequence and we produce an output at every time step of the sequence and then finally compute some loss, this is sometimes called backpropagation through time, because you're imagining that in the forward pass, you're kind of stepping forward through time, and then during the backward pass, you're sort of going backwards through time to compute all your gradients. This can actually be kind of problematic if you want to train on sequences that are very, very long. So if you imagine that we were kind of trying to train a neural network language model on maybe the entire text of Wikipedia, which is, by the way, something that people do pretty frequently, this would be super slow, and every time we made a gradient step, we would have to make a forward pass through the entire text of all of Wikipedia, and then make a backward pass through all of Wikipedia, and then make a single gradient update. And that would be super slow. Your model would never converge. It would also take a ridiculous amount of memory, so this would be just really bad. In practice, what people do is this sort of approximation called truncated backpropagation through time.
Here, the idea is that, even though our input sequence is very, very long, and even potentially infinite, what we'll do when we're training the model is step forward for some number of steps, maybe like a hundred, which is kind of a ballpark number that people frequently use. So we'll step forward for maybe a hundred steps, compute a loss only over this sub sequence of the data, and then back propagate through this sub sequence, and now make a gradient step. And now, when we repeat, well, we still have these hidden states that we computed from the first batch, and now, when we compute this next batch of data, we will carry those hidden states forward in time, so the forward pass will be exactly the same. But now when we compute a gradient step for this next batch of data, we will only backpropagate again through this second batch. Now, we'll make a gradient step based on this truncated backpropagation through time. This process will continue, where now, when we make the next batch, we'll again copy these hidden states forward, but then step forward and then step backward, but only for some small number of time steps. So you can kind of think of this as being analogous to stochastic gradient descent in the case of sequences. Remember, when we talked about training our models on large data sets, it would be super expensive to compute the gradients over every element in the data set. So instead, we kind of take small samples, small mini batches instead, and use mini batches of data to compute gradient steps in the image classification case. Question? Is this kind of, the question is, is this kind of making the Markov assumption? No, not really, because we're carrying this hidden state forward in time forever. It's making a Markovian assumption in the sense that, conditioned on the hidden state, the hidden state is all that we need to predict the entire future of the sequence. But that assumption is kind of built into the recurrent neural network formula from the start, and that's not really particular to backpropagation through time. Truncated backprop through time is just a way to approximate these gradients without making a backwards pass through your potentially very large sequence of data. This all sounds very complicated and confusing, and it sounds like a lot of code to write, but in fact, this can actually be pretty concise. Andrej has this example of what he calls min-char-rnn, that does all of this stuff in just 112 lines of Python. It handles building the vocabulary. It trains the model with truncated backpropagation through time. And then, it can actually sample from that model, in actually not too much code. So even though this sounds like kind of a big, scary process, it's actually not too difficult. I'd encourage you, if you're confused, to maybe go check this out and step through the code on your own time, and see, kind of, all of these concrete steps happening in code. So this is all in just a single file, all using numpy with no dependencies. This is relatively easy to read. So then, once we have this idea of training a recurrent neural network language model, we can actually have a lot of fun with this. And we can take in, sort of, any text that we want, take in, like, whatever random text you can think of from the internet, train our recurrent neural network language model on this text, and then generate new text.
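Before the fun examples, here is a compact sketch of truncated backpropagation through time for a character level vanilla RNN, in the spirit of min-char-rnn (a condensed stand-in, not the actual min-char-rnn code): the hidden state is carried forward across chunks, but the backward pass only runs within each chunk.

import numpy as np

# Compact sketch of truncated backpropagation through time for a character
# level vanilla RNN. The "corpus" is a short repeated string standing in for
# a long text; all hyperparameters are placeholder choices.
np.random.seed(0)
data = "hello world " * 50
chars = sorted(set(data))
V = len(chars)
ix = {c: i for i, c in enumerate(chars)}

H, T, lr = 32, 16, 1e-1          # hidden size, truncation length, step size
Wxh = np.random.randn(H, V) * 0.01
Whh = np.random.randn(H, H) * 0.01
Why = np.random.randn(V, H) * 0.01

h = np.zeros((H, 1))             # hidden state carried forward across chunks
pos = 0
for step in range(200):
    if pos + T + 1 >= len(data):
        pos, h = 0, np.zeros((H, 1))      # wrap around and reset the state
    inputs = [ix[c] for c in data[pos:pos + T]]
    targets = [ix[c] for c in data[pos + 1:pos + T + 1]]

    # forward through this chunk only, starting from the carried-in hidden state
    xs, hs, ps, loss = {}, {-1: h}, {}, 0.0
    for t in range(T):
        xs[t] = np.zeros((V, 1)); xs[t][inputs[t]] = 1
        hs[t] = np.tanh(Wxh @ xs[t] + Whh @ hs[t - 1])
        y = Why @ hs[t]
        ps[t] = np.exp(y - y.max()); ps[t] /= ps[t].sum()
        loss += -np.log(ps[t][targets[t], 0])

    # backward through this chunk only: this is the truncation
    dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
    dhnext = np.zeros((H, 1))
    for t in reversed(range(T)):
        dy = ps[t].copy(); dy[targets[t]] -= 1
        dWhy += dy @ hs[t].T
        dh = Why.T @ dy + dhnext
        dhraw = (1 - hs[t] ** 2) * dh         # backprop through tanh
        dWxh += dhraw @ xs[t].T
        dWhh += dhraw @ hs[t - 1].T
        dhnext = Whh.T @ dhraw
    for W, dW in zip([Wxh, Whh, Why], [dWxh, dWhh, dWhy]):
        W -= lr * np.clip(dW, -5, 5)          # simple clipped gradient step

    h = hs[T - 1]                             # carry the hidden state forward
    pos += T
    if step % 50 == 0:
        print(step, float(loss))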
So in this example, we took the entire text of all of Shakespeare's works, and then used that to train a recurrent neural network language model on all of Shakespeare. And you can see that at the beginning of training, it's producing maybe random gibberish garbage, but throughout the course of training, it ends up producing things that seem relatively reasonable. And after this model has been trained pretty well, it produces text that seems kind of Shakespeare-esque to me. "Why do what that day," replied, whatever, right, you can read this. It kind of looks like Shakespeare. And if you actually train this model even more, and let it converge even further, and then sample even longer sequences, you can see that it learns all kinds of crazy cool stuff that really looks like a Shakespeare play. It uses these headings to say who's speaking. Then it produces these bits of text with dialogue that sounds kind of Shakespeare-esque. It knows to put line breaks in between these different things. And this is all really cool, all just learned from the structure of the data. We can actually get even crazier than this. This was one of my favorite examples that I found online. Is anyone a mathematician in this room? Has anyone taken an algebraic topology course by any chance? Wow, a couple, that's impressive. So you probably know more algebraic topology than me, but I found this open source algebraic topology textbook online. It's just a whole bunch of TeX files with this super dense mathematics, and LaTeX, because LaTeX sort of lets you write equations and diagrams and everything just using plain text. So we can actually train our recurrent neural network language model on the raw LaTeX source code of this algebraic topology textbook. And if we do that, then when we sample from the model, we get something that looks kind of like algebraic topology. It knows to put in equations. It puts in all kinds of crazy stuff. It says things like, to prove this, we see that F sub U is a covering of x prime, blah, blah, blah. It knows where to put unions. It knows to put squares at the end of proofs. It makes lemmas. It makes references to previous lemmas, like here: namely, by a lemma, we see that R is geometrically something. So it's actually pretty crazy. It also sometimes tries to make diagrams. For those of you that have taken algebraic topology, you know that these commutative diagrams are kind of a thing that you work with a lot, so it kind of got the general gist of how to make those diagrams, but they actually don't make any sense. And actually, one of my favorite examples here is that it sometimes omits proofs. It'll sometimes say something like theorem, blah, blah, blah, proof omitted. This thing has kind of gotten the gist of how some of these math textbooks look. We can have a lot of fun with this. So we also tried training one of these models on the entire source code of the Linux kernel, because again, this is all character-level text that we can train on. And then, when we sample from this, it actually again looks like C source code. It knows how to write if statements. It has pretty good code formatting skills. It knows to indent after these if statements. It knows to put curly braces. It even writes comments, although the comments are usually nonsense.
One problem with this model is that it knows how to declare variables, but it doesn't always use the variables that it declares, and sometimes it tries to use variables that haven't been declared. So this wouldn't compile. I would not recommend sending this as a pull request to Linux. This thing also figures out how to recite the GNU license, character by character. It kind of knows that you need to recite the GNU license, and after the license comes some includes, then some other includes, then source code. So this thing has actually learned quite a lot about the general structure of the data, where, again, during training, all we asked this model to do was try to predict the next character in the sequence. We didn't tell it any of this structure, but somehow, just through the course of this training process, it learned a lot about the latent structure in the sequential data. Yeah, so it knows how to write code. It does a lot of cool stuff. I had this paper with Andrej a couple years ago where we trained a bunch of these models and then we wanted to try to poke into the brains of these models and figure out what they are doing and why they are working. So these recurrent neural networks have this hidden vector that gets updated at every time step, and what we wanted to figure out is whether we could find some elements of this vector that have some semantically interpretable meaning. So what we did is we trained a neural network language model, one of these character-level models, on one of these datasets, and then we picked one of the elements in that hidden vector and looked at the value of that element over the course of a sequence, to try to get some sense of what these different hidden states are looking for. When you do this, a lot of them end up looking kind of like random gibberish garbage. So here, what we've done is we've picked one element of that vector, and we run the sequence forward through the trained model, and the color of each character corresponds to the magnitude of that single scalar element of the hidden vector at every time step as it's reading the sequence. You can see that a lot of the elements in these hidden states are not very interpretable; it seems like they're doing some of this low-level language modeling to figure out what character should come next. But some of them end up quite nice. So here we found this cell that is looking for quotes. You can see that there's this one element in the vector that is off, off, off, off, shown in blue, and then once it hits a quote, it turns on and remains on for the duration of the quote. And when we hit the second quotation mark, that cell turns off. So somehow, even though this model was only trained to predict the next character in a sequence, it learned that a useful thing for doing this might be to have some cell that's trying to detect quotes. We also found this other cell that looks like it's counting the number of characters since a line break. You can see that at the beginning of each line, this element starts off at zero. Throughout the course of the line, it's gradually more red, so that value increases. And then after the newline character, it resets to zero. So you can imagine that maybe this cell is letting the network keep track of when it needs to produce these newline characters.
We also found some cells that, when we trained on the Linux source code, turn on inside the conditions of if statements. So this maybe allows the network to differentiate whether it's outside an if statement or inside that condition, which might help it model these sequences better. We also found some that turn on in comments, or some that seem like they're counting the number of indentation levels. This is all really cool stuff because it's saying that even though we are only trying to train this model to predict next characters, it somehow ends up learning a lot of useful structure about the input data. Now, this has not really been computer vision so far, and we need to pull this back to computer vision since this is a vision class. We've alluded many times to this image captioning model where we want to build models that can input an image and then output a caption in natural language. There were a bunch of papers a couple years ago that all had relatively similar approaches, but I'm showing the figure from the paper from our lab, in a totally unbiased way. The idea here is that the caption is this variable length sequence; the sequence might have different numbers of words for different captions. So this is a totally natural fit for a recurrent neural network language model. What this model looks like is that we have some convolutional network which will take as input the image, and we've seen a lot about how convolutional networks work at this point, and that convolutional network will produce a summary vector of the image, which will then feed into the first time step of one of these recurrent neural network language models, which will then produce words of the caption one at a time. So the way this works at test time, after the model is trained, looks almost exactly the same as the character-level language models that we saw a little bit ago. We'll take our input image and feed it through our convolutional network. But now, instead of taking the softmax scores from an ImageNet model, we'll instead take this 4,096 dimensional vector from the end of the model, and we'll use that vector to summarize the whole content of the image. Now, remember when we talked about RNN language models, we said that we need to seed the language model with some first initial input to tell it to start generating text. So in this case, we'll give it some special start token, which is just saying, hey, this is the start of a sentence, please start generating some text conditioned on this image information. Previously, we saw that in this RNN language model, we had these matrices that were taking the input at the current time step and the hidden state at the previous time step and combining them to get the next hidden state. Well, now we also need to add in this image information. People play around with different ways to incorporate this image information, but one simple way is just to add a third weight matrix that adds in this image information at every time step when computing the next hidden state. So now we'll compute this distribution of scores over all words in our vocabulary, and here our vocabulary is something like all English words, so it could be pretty large. We'll sample from that distribution and then pass that word back as input at the next time step.
And then we feed that word in, again get a distribution over all words in the vocab, and again sample to produce the next word. So then, after that is all done, we'll have generated this complete sentence. We stop generation once we sample the special end token, which kind of corresponds to the period at the end of the sentence. Once the network samples this end token, we stop generation, we're done, and we've got our caption for this image. During training, we put an end token at the end of every caption, so that the network learns during training that end tokens come at the end of sequences. Then, at test time, it tends to sample these end tokens once it's done generating. So we train this model in a completely supervised way. You can find datasets that have images together with natural language captions; Microsoft COCO is probably the biggest and most widely used for this task. And you can just train this model in a purely supervised way, and then backpropagate to jointly train both this recurrent neural network language model and also pass gradients back into the final layers of the CNN, additionally updating the weights of the CNN to jointly tune all parts of the model to perform this task. Once you train these models, they actually do some pretty reasonable things. These are some real results from one of these trained models, and it says things like a cat sitting on a suitcase on the floor, which is pretty impressive. It knows about cats sitting on a tree branch, which is also pretty cool. It knows about two people walking on the beach with surfboards. So these models are actually pretty powerful and can produce relatively complex captions to describe the image. That being said, these models are really not perfect. They're not magical. Just like any machine learning model, if you try to run them on data that is very different from the training data, they don't work very well. So for example, in this example, it says a woman is holding a cat in her hand. There's clearly no cat in the image. But she is wearing a fur coat, and maybe the texture of that coat looked like a cat to the model. Over here, we see a woman standing on a beach holding a surfboard. Well, she's definitely not holding a surfboard, and she's doing a handstand, which is maybe the interesting part of that image, and the model totally missed that. Also, over here, we see this example where there's a picture of a spider web on a tree branch, and it says something like a bird sitting on a tree branch. So it totally missed the spider, but during training, it never really saw examples of spiders. It just knows that birds sit on tree branches. So it kind of makes these reasonable mistakes. Or here at the bottom, it can't really tell the difference between this guy throwing and catching the ball, but it does know that it's a baseball player and there are balls and things involved. So again, I just want to say that these models are not perfect. They work pretty well when you ask them to caption images that are similar to the training data, but they definitely have a hard time generalizing far beyond that.
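(Circling back to the recurrence mentioned a couple of slides ago, here's a minimal numpy sketch of a captioning-style hidden state update, where a third weight matrix injects the CNN image feature at every time step. The matrix names and sizes here are hypothetical, just one simple way to write it, not the exact formulation from any particular paper.)

```python
import numpy as np

D_word, D_img, H = 300, 4096, 512

Wxh = np.random.randn(H, D_word) * 0.01   # word embedding -> hidden
Whh = np.random.randn(H, H) * 0.01        # previous hidden -> hidden
Wih = np.random.randn(H, D_img) * 0.01    # CNN image feature -> hidden

v = np.random.randn(D_img)                # CNN feature summarizing the image
h = np.zeros(H)                           # initial hidden state
x = np.random.randn(D_word)               # embedding of the <START> token

for t in range(5):                        # unroll a few caption time steps
    # Same vanilla RNN update as before, plus the extra image term Wih @ v.
    h = np.tanh(Wxh @ x + Whh @ h + Wih @ v)
    # A linear layer + softmax over the word vocabulary would go here, and the
    # sampled word's embedding would become x at the next step.
    x = np.random.randn(D_word)           # placeholder for that sampled word
```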
So another thing you'll sometimes see is this slightly more advanced model called attention, where now, when we're generating the words of this caption, we allow the model to steer its attention to different parts of the image. I don't want to spend too much time on this, but the general way this works is that now our convolutional network, rather than producing a single vector summarizing the entire image, produces a grid of vectors, maybe one vector for each spatial location in the image. And now, when this model runs forward, in addition to sampling from the vocabulary at every time step, it also produces a distribution over the locations in the image where it wants to look. This distribution over image locations can be seen as a kind of attention over where the model should look. So that first hidden state computes this distribution over image locations, which then goes back to the set of vectors to give a single summary vector that maybe focuses the attention on one part of that image. And that summary vector gets fed, as an additional input, at the next time step of the neural network. And again it will produce two outputs: one is our distribution over vocabulary words, and the other is a distribution over image locations. This whole process continues, and it will do these two different things at every time step. After you train the model, you can see that it will shift its attention around the image for every word that it generates in the caption. Here you can see that it produced the caption, a bird is flying over, I can't see that far, but you can see that its attention is shifting around different parts of the image for each word in the caption that it generates. There's this notion of hard attention versus soft attention, which I don't really want to get into too much, but with soft attention, we're taking a weighted combination of the features from all image locations, whereas in the hard attention case, we're forcing the model to select exactly one location to look at in the image at each time step. The hard attention case, where we're selecting exactly one image location, is a little bit tricky because that is not really a differentiable function, so you need to do something slightly fancier than vanilla backpropagation in order to train the model in that scenario. I think we'll talk about that a little bit later in the lecture on reinforcement learning. Now, when you train one of these attention models and then run it to generate captions, you can see that it tends to focus its attention on the salient or semantically meaningful part of the image when generating captions. You can see that the caption was a woman is throwing a frisbee in a park, and when the model generated the word frisbee, it was at the same time focusing its attention on the image region that actually contains the frisbee. This is actually really cool. We did not tell the model where it should be looking at every time step. It figured all that out for itself during the training process, because somehow it figured out that looking at that image region was the right thing to do for this image.
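(Here's a minimal numpy sketch of the soft attention weighting just described. The grid size, feature dimension, and the scoring weights W_att are all illustrative assumptions; real models typically compute the scores with a small learned network.)

```python
import numpy as np

L, D, H = 49, 512, 512
features = np.random.randn(L, D)          # one CNN feature vector per image location
h = np.random.randn(H)                    # current decoder hidden state
W_att = np.random.randn(L, H) * 0.01      # hypothetical scoring weights

scores = W_att @ h                        # one score per image location
scores = scores - scores.max()            # for numerical stability
weights = np.exp(scores) / np.exp(scores).sum()   # softmax -> attention distribution

# Soft attention: differentiable weighted combination over all locations.
context = weights @ features              # (D,) summary vector fed to the next time step

# Hard attention would instead pick a single location, which is not differentiable.
hard_context = features[np.random.choice(L, p=weights)]
```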
And because everything in this model is differentiable, because we can backpropagate through all these soft attention steps, all of this soft attention stuff just comes out through the training process. So that's really, really cool. By the way, this idea of recurrent neural networks and attention actually gets used in other tasks beyond image captioning. One recent example is this idea of visual question answering. Here, our model is going to take two things as input: an image, and a natural language question asking something about the image. Here, we might see this image on the left and we might ask the question, what endangered animal is featured on the truck? And now the model needs to select which of these four natural language answers correctly answers that question in the context of the image. So you can imagine stitching this model together using CNNs and RNNs in a natural way. Now we're in this many to one scenario, where our model needs to take as input this natural language sequence, so we can imagine running a recurrent neural network over each element of that input question to summarize the question in a single vector. And then we can have a CNN to again summarize the image, and now combine both the vector from the CNN and the vector from the question encoding RNN to predict a distribution over answers. You'll also sometimes see this idea of soft spatial attention being incorporated into things like visual question answering, so you can see that here, this model is also putting spatial attention over the image when it's trying to determine answers to the questions. Yeah, question? The question is, how are the different inputs combined? Do you mean the encoded question vector and the encoded image vector? Yeah, so the question is how the encoded image and the encoded question vector are combined. The simplest thing to do is just to concatenate them and stick them into fully connected layers. That's probably the most common approach and probably the first thing to try. Sometimes people do slightly fancier things, where they might have multiplicative interactions between those two vectors to allow a more powerful function, but generally, concatenation is a good first thing to try. Okay, so now we've talked about a bunch of scenarios where RNNs are used for different kinds of problems. And I think it's super cool because it allows you to start tackling really complicated problems combining images and computer vision with natural language processing. You can see that we can stitch together these models like Lego blocks and attack really complicated things, like image captioning or visual question answering, just by stitching together these relatively simple types of neural network modules. But I'd also like to mention that, so far, we've talked about this idea of a single recurrent network layer, where we have one hidden state, and another thing that you'll see pretty commonly is this idea of a multilayer recurrent neural network. Here, this is a three layer recurrent neural network, so now our input goes in and produces a sequence of hidden states from the first recurrent neural network layer.
And now we can use that sequence of hidden states as the input sequence to another recurrent neural network layer, which will then produce another sequence of hidden states from the second RNN layer. And then you can just imagine stacking these things on top of each other, because we've seen in other contexts that deeper models tend to perform better for various problems, and the same kind of holds for RNNs as well. For many problems, you'll see that a two or three layer recurrent neural network model is pretty commonly used. You typically don't see super deep models with RNNs; generally, two, three, or four layer RNNs is maybe as deep as you'll typically go. Then, I think it's also really interesting and important to think about, now that we've seen what kinds of problems these RNNs can be used for, exactly what happens to these models when we try to train them. So here, I've drawn this little vanilla RNN cell that we've talked about so far. We're taking our current input, x t, and our previous hidden state, h t minus one; those are two vectors, so we can just stack them together, then perform a matrix multiplication with our weight matrix, and then squash that output through a tanh, and that gives us our next hidden state. That's the basic functional form of this vanilla recurrent neural network. But then we need to think about what happens in this architecture during the backward pass when we try to compute gradients. During the backward pass, we'll receive the derivative of the loss with respect to h t, and during the backward pass through the cell, we'll need to compute the derivative of the loss with respect to h t minus one. When we compute this backward pass, we see that the gradient flows backward through this red path. First, that gradient flows backward through the tanh gate, and then it flows backward through the matrix multiplication gate. And as we've seen in the homework when implementing these matrix multiplication layers, when you backpropagate through a matrix multiplication gate, you end up multiplying by the transpose of that weight matrix. So that means that every time we backpropagate through one of these vanilla RNN cells, we end up multiplying by some part of the weight matrix. Now, imagine that we are sticking many of these recurrent neural network cells in sequence, because again, this is an RNN and we want to model sequences. If you imagine what happens to the gradient flow through a sequence of these layers, then something kind of fishy starts to happen. Because now, when we want to compute the gradient of the loss with respect to h zero, we need to backpropagate through every one of these RNN cells, and every time you backpropagate through one cell, you pick up one of these w transpose factors. That means the final expression for the gradient on h zero will involve many, many factors of this weight matrix, which could be kind of bad. Maybe don't think about the matrix case for a moment; imagine a scalar case instead.
If we have some scalar and we multiply by that same number over and over and over again, maybe not for four time steps but for something like a hundred or several hundred time steps, then multiplying by the same number over and over again is really bad. In the scalar case, it's either going to explode, in the case where that number is greater than one, or it's going to vanish toward zero, in the case where that number is less than one in absolute value. The only way this will not happen is if that number is exactly one, which is very rare in practice. The same intuition extends to the matrix case, but now, rather than the absolute value of a scalar, you instead need to look at the largest singular value of this weight matrix. If that largest singular value is greater than one, then during this backward pass, when we multiply by the weight matrix over and over, the gradient on h zero will become very, very large. That's something we call the exploding gradient problem, where this gradient explodes exponentially in the number of time steps that we backpropagate through. And if the largest singular value is less than one, then we get the opposite problem, where our gradients shrink and shrink exponentially as we backpropagate and pick up more and more factors of this weight matrix. That's called the vanishing gradient problem. There's a bit of a hack that people sometimes do to fix the exploding gradient problem, called gradient clipping, which is just a simple heuristic saying that after we compute our gradient, if its L2 norm is above some threshold, then scale it down so that it has at most that maximum norm. This is kind of a nasty hack, but it actually gets used in practice quite a lot when training recurrent neural networks, and it's a relatively useful tool for attacking this exploding gradient problem. But for the vanishing gradient problem, what we typically do is move to a more complicated RNN architecture. So that motivates this idea of an LSTM. An LSTM, which stands for Long Short Term Memory, is this slightly fancier recurrence relation for these recurrent neural networks. It's really designed to help alleviate this problem of vanishing and exploding gradients: rather than hacking on top of the vanilla RNN, we just design the architecture to have better gradient flow properties, kind of an analogy to those fancier CNN architectures that we saw at the top of the lecture. Another thing to point out is that the LSTM cell actually comes from 1997. So this idea has been around for quite a while, and the folks who were working on these ideas way back in the 90s were definitely ahead of the curve, because these models are used everywhere now, 20 years later. LSTMs have this kind of funny functional form. Remember that the vanilla recurrent neural network had a hidden state, and we used a recurrence relation to update that hidden state at every time step. Well, in an LSTM, we actually maintain two hidden states at every time step. One is h t, which is called the hidden state, and is kind of an analog of the hidden state that we had in the vanilla RNN. But an LSTM also maintains a second vector, c t, called the cell state.
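(Before digging into the cell state, here's a quick numpy illustration of the scalar intuition and the gradient clipping heuristic from a moment ago; the numbers and threshold are purely illustrative.)

```python
import numpy as np

# Scalar intuition: multiplying by the same number at every time step either
# explodes (> 1) or vanishes (< 1 in absolute value).
for w in [1.1, 0.9]:
    grad = 1.0
    for _ in range(100):              # 100 "time steps"
        grad *= w
    print(w, grad)                    # 1.1 -> ~1.4e4, 0.9 -> ~2.7e-5

# Gradient clipping: if the gradient's L2 norm exceeds a threshold, rescale it
# so its norm equals that threshold.
def clip_gradient(grad, threshold=5.0):
    norm = np.linalg.norm(grad)
    if norm > threshold:
        grad = grad * (threshold / norm)
    return grad

g = np.random.randn(1000) * 10.0      # a pretend exploding gradient
print(np.linalg.norm(clip_gradient(g)))   # <= 5.0
```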
The cell state is a vector that is kept internal to the LSTM, and it doesn't really get exposed to the outside world. You can see that through the update equations: first we take our two inputs and use them to compute four gates called i, f, o, and g; we use those gates to update the cell state, c t; and then we expose part of the cell state as the hidden state at the next time step. This is kind of a funny functional form, and I want to walk through, for a couple of slides, exactly why we use this architecture and why it makes sense, especially in the context of vanishing and exploding gradients. The first thing we do in an LSTM is that we're given the previous hidden state, h t minus one, and the current input vector, x t, just like in the vanilla RNN. In the vanilla RNN, remember, we took those two input vectors, concatenated them, and then did a matrix multiply to directly compute the next hidden state. The LSTM does something a little bit different. We take the previous hidden state and the current input, stack them, and multiply by a very big weight matrix, w, to compute four different gates, which all have the same size as the hidden state. Sometimes you'll see this written in different ways: some authors write a different weight matrix for each gate, and some combine them all into one big weight matrix, but it's all really the same thing. The idea is that we take our hidden state and our current input, and we use those to compute these four gates. You often see these written as i, f, o, g, ifog, which makes it pretty easy to remember what they are. I is the input gate; it says how much we want to input into our cell. F is the forget gate: how much do we want to forget the cell memory from the previous time step. O is the output gate, which is how much of the cell we want to reveal to the outside world. And g really doesn't have a nice name, so I usually call it the gate gate; it tells us how much we want to write into our cell. You'll notice that these four gates use different non linearities. The input, forget, and output gates all use sigmoids, which means their values will be between zero and one, whereas the gate gate uses a tanh, which means its output will be between minus one and one. These seem kind of weird, but it makes a little bit more sense if you imagine them as binary values; think about what happens at the extremes of these values. If you look at the next equation after we compute these gates, you can see that the cell state from the previous time step is being multiplied element wise by the forget gate. If you think of the forget gate as a vector of zeros and ones, it's telling us, for each element of the cell state, do we want to forget that element of the cell, in the case where the forget gate is zero, or do we want to remember that element of the cell, in the case where the forget gate is one. Once we've used the forget gate to gate off part of the cell state, then we have the second term, which is the element wise product of i and g.
So now, i is this vector of zeros and ones, because it's coming through a sigmoid, telling us, for each element of the cell state, do we want to write to that element at this time step, in the case where i is one, or do we not want to write to it, in the case where i is zero. And the gate gate, because it's coming through a tanh, will be between minus one and one, so think of it as roughly plus or minus one: it's the candidate value that we might consider writing to each element of the cell state at this time step. Then, if you look at the cell state equation, you can see that at every time step the cell state has these different, independent scalar values, and each of them can be incremented or decremented by up to one. So inside the cell state, we can either remember or forget our previous state, and then we can increment or decrement each element of that cell state by up to one at each time step. You can kind of think of the elements of the cell state as little scalar counters, almost like integer counters, that can be incremented and decremented at each time step. And after we've computed our cell state, we use that updated cell state to compute a hidden state, which we will reveal to the outside world. Because the cell state has this interpretation of being counters, counting up or down by one at each time step, we want to squash that counter value into a nice minus one to one range using a tanh. And then we multiply element wise by this output gate. The output gate is again coming through a sigmoid, so you can think of it as being mostly zeros and ones, and it tells us, for each element of our cell state, do we want to reveal or not reveal that element when we're computing the external hidden state for this time step. And then, I think there's kind of a tradition in people trying to explain LSTMs that everyone needs to come up with their own potentially confusing LSTM diagram, so here's my attempt. Here, you can see what's going on inside this LSTM cell: we're taking as input, on the left, our previous cell state and previous hidden state, as well as our current input, x t. We take the previous hidden state and the current input, stack them, and multiply with this weight matrix, w, to produce our four gates. Here I've left out the non linearities because we saw those on a previous slide. The forget gate multiplies element wise with the cell state. The input and gate gates are multiplied element wise and added to the cell state, and that gives us our next cell state. The next cell state gets squashed through a tanh and multiplied element wise with this output gate to produce our next hidden state. Question? No, they're coming from different parts of this weight matrix. So if our x and our h both have dimension h, then after we stack them, they'll be a vector of size two h, and our weight matrix will be a matrix of size four h by two h. You can think of that as having four chunks of this weight matrix, and each of these four chunks is going to compute a different one of these gates. You'll often see this written, for clarity, by combining all four of those different weight matrices into a single large matrix, w, just for notational convenience, but they're all computed using different parts of the weight matrix.
But you're correct that they're all computed using the same functional form of just stacking the two vectors and taking a matrix multiplication. Now that we have this picture, we can think about what happens to an LSTM cell during the backward pass. We saw, in the context of the vanilla recurrent neural network, that bad things happened during the backward pass, where we were continually multiplying by that weight matrix, w. But now the situation looks quite a bit different in the LSTM. If you follow this backward path computing the gradients of the cell state, we get quite a nice picture. When our upstream gradient from the cell comes in and we backpropagate backward through this addition operation, remember that addition just copies the upstream gradient into the two branches, so our upstream gradient gets copied and passed directly to backpropagating through this element wise multiply. So our upstream gradient ends up getting multiplied element wise by the forget gate. As we backpropagate backward through this cell state, the only thing that happens to our upstream cell state gradient is that it gets multiplied element wise by the forget gate. This is a lot nicer than the vanilla RNN for two reasons. One is that this forget gate is now an element wise multiplication rather than a full matrix multiplication, and element wise multiplication is going to be a little bit nicer than a full matrix multiplication. Second, the element wise multiplication will potentially be multiplying by a different forget gate at every time step. Remember, in the vanilla RNN, we were continually multiplying by that same weight matrix over and over again, which led very explicitly to these exploding or vanishing gradients. But in the LSTM case, the forget gate can vary from time step to time step, so it's much easier for the model to avoid these problems of exploding and vanishing gradients. Finally, because the forget gate comes out of a sigmoid, this element wise multiply is guaranteed to be between zero and one, which again leads to nicer numerical properties if you imagine multiplying by these things over and over again. Another thing to notice is that in the vanilla recurrent neural network, the gradients were also flowing through a tanh at every time step during the backward pass. But in an LSTM, the hidden state is used to compute those outputs, y t, so if you imagine backpropagating from the final hidden state back to the first cell state, then through that backward path we only backpropagate through a single tanh non linearity, rather than through a separate tanh at every time step. When you put all these things together, you can see that this backward pass through the cell state is kind of a gradient super highway that lets gradients pass relatively unimpeded from the loss at the very end of the model all the way back to the initial cell state at the beginning of the model. Was there a question? Yeah, what about the gradient with respect to w? Because that's ultimately the thing that we care about. So, the gradient with respect to w comes through at every time step: each time step takes the current cell state as well as the current hidden state, and that gives us our local gradient on w for that time step.
So, just as in the vanilla RNN case, we end up adding those per-time-step w gradients to compute our final gradient on w. But now, imagine the situation where we have a very long sequence, and we're only getting gradients at the very end of the sequence. As you backpropagate through, we'll get a local gradient on w for each time step, and that local gradient on w will be coming through these gradients on c and h. Because we're maintaining the gradients on c much more nicely in the LSTM case, those local gradients on w at each time step will also be carried backward through time much more cleanly. Another question? Yeah, so the question is, due to the non linearities, could this still be susceptible to vanishing gradients? And that could be the case. One problem you might imagine is that if these forget gates are always less than one, you might get vanishing gradients as you continually go through these forget gates. One trick that people do in practice is that they will sometimes initialize the biases of the forget gate to be somewhat positive, so that at the beginning of training, those forget gates are always very close to one. So at least at the beginning of training, we have relatively clean gradient flow through these forget gates, since they're all initialized to be near one, and then throughout the course of training, the model can learn those biases and learn to forget where it needs to. You're right that there could still be some potential for vanishing gradients here, but it's much less extreme than the vanilla RNN case, both because those forget gates can vary at each time step, and also because we're doing an element wise multiplication rather than a full matrix multiplication. You can see that this LSTM actually looks quite similar to ResNet. In the residual network, we had this path of identity connections going backward through the network, and that gave a sort of gradient super highway for gradients to flow backward in ResNet. It's kind of the same intuition in the LSTM, where these additive and element wise multiplicative interactions of the cell state give a similar gradient super highway for gradients to flow backward through the cell state. By the way, there's this other kind of nice paper called highway networks, which is kind of in between this idea of the LSTM cell and these residual networks. These highway networks actually came before residual networks, and they had this idea where at every layer of the highway network, we compute a candidate activation, as well as a gating function that interpolates between the previous input at that layer and the candidate activation that came through our convolutions or whatnot. So there are actually a lot of architectural similarities between these things, and people take a lot of inspiration from training very deep CNNs and very deep RNNs, and there's a lot of crossover here. Very briefly, you'll see a lot of other variants of recurrent neural network architectures out there in the wild. Probably the most common, apart from the LSTM, is the GRU, the gated recurrent unit. You can see its update equations here, and it has a similar flavor to the LSTM, where it uses these multiplicative element wise gates together with additive interactions to avoid this vanishing gradient problem.
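(To tie the LSTM discussion together, here's a minimal numpy sketch of a single LSTM step, with all four gates computed from one stacked weight matrix as described above. The sizes, the random initialization, and the assumption that the input has the same dimension as the hidden state are purely illustrative.)

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

H = 128
W = np.random.randn(4 * H, 2 * H) * 0.01   # one big matrix: four gates from [h_prev; x]
b = np.zeros(4 * H)

def lstm_step(x, h_prev, c_prev):
    z = W @ np.concatenate([h_prev, x]) + b
    i = sigmoid(z[0:H])          # input gate: how much to write to the cell
    f = sigmoid(z[H:2*H])        # forget gate: how much of the old cell to keep
    o = sigmoid(z[2*H:3*H])      # output gate: how much of the cell to reveal
    g = np.tanh(z[3*H:4*H])      # "gate gate": candidate values to write
    c = f * c_prev + i * g       # cell update: forget, then add new content
    h = o * np.tanh(c)           # hidden state: gated, squashed view of the cell
    return h, c

h = np.zeros(H)
c = np.zeros(H)
x = np.random.randn(H)           # assumes input dimension equals H for simplicity
h, c = lstm_step(x, h, c)
```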
There's also this cool paper called LSTM: A Search Space Odyssey, very inventive title, where they tried to play around with the LSTM equations, for example swapping out the non linearities, like do we really need that tanh when exposing the cell through the output gate, and they tried to answer a lot of these questions about each of the pieces of the LSTM update equations: what happens if we change the model and tweak those LSTM equations a little bit. And the conclusion is that they all work about the same. Some of them work a little bit better than others for one problem or another, but generally, none of the tweaks of the LSTM that they tried were significantly better than the original LSTM across all problems. So that gives you a little bit more faith that, even though the LSTM update equations seem kind of magical, they're useful anyway, and you should probably consider them for your problem. There's also this cool paper from Google a couple years ago where they did kind of an evolutionary search over a very large number of random RNN architectures: they randomly permute these update equations and try putting the additions and the multiplications and the gates and the non linearities together in different kinds of combinations. They blasted this out over their huge Google cluster and just tried a whole bunch of these different update equations in various flavors. And again, it was the same story: they didn't really find anything that was significantly better than the existing GRU or LSTM styles, although there were some variations that worked slightly better or worse for certain problems. So the takeaway is that there's probably not so much magic in those particular equations, but this idea of managing gradient flow properly through these additive connections and multiplicative gates is super useful. So yeah, the summary is that RNNs are super cool. They allow you to attack tons of new types of problems. They are sometimes susceptible to vanishing or exploding gradients, but we can address that with gradient clipping and with fancier architectures. And there's a lot of cool overlap between CNN architectures and RNN architectures. So next time, you'll be taking the midterm. But after that, we'll have a, sorry, a question? The midterm is after this lecture, so anything up to this point is fair game. And so, good luck on the midterm on Tuesday.
Lecture_Collection_Convolutional_Neural_Networks_for_Visual_Recognition_Spring_2017
Lecture_9_CNN_Architectures.txt
- All right, welcome to lecture nine. So today we will be talking about CNN architectures. Just a few administrative points before we get started: assignment two is due Thursday. The midterm will be in class on Tuesday, May ninth, so next week, and it will cover material through this coming Thursday, May fourth. So everything up to recurrent neural networks is going to be fair game. For the poster session, we've decided on a time: it's going to be Tuesday, June sixth, from twelve to three p.m. This is the last week of classes, so we have our poster session a little bit early during the last week, so that after you get feedback you still have some time to work on your final report, which will be due finals week. Okay, so just a quick review of last time. Last time we talked about different kinds of deep learning frameworks. We talked about PyTorch, TensorFlow, Caffe2, and we saw that using these kinds of frameworks we were able to easily build big computational graphs, for example very large neural networks and ConvNets, and be able to really easily compute gradients in these graphs, so to compute all of the gradients for all the intermediate variables, weights, and inputs, use that to train our models, and run all of this efficiently on GPUs. And we saw that for a lot of these frameworks, the way this works is by working with these modularized layers that you have also been writing in your homeworks, where we have a forward pass and a backward pass, and then in our final model architecture, all we need to do is define the sequence of layers. Using that, we're able to very easily build up very complex network architectures. So today we're going to talk about some specific kinds of CNN architectures that are used today in cutting edge applications and research. We'll go into depth on some of the most commonly used architectures, which are winners of ImageNet classification benchmarks: in chronological order, AlexNet, VGGNet, GoogLeNet, and ResNet. These we'll go into in a lot of depth, and then after that I'll briefly go through some other architectures that are not as prominently used these days, but are interesting either from a historical perspective or as recent areas of research. Okay, so just a quick review. We talked a long time ago about LeNet, which was one of the first instantiations of a ConvNet that was successfully used in practice. This was the ConvNet that took an input image, used conv filters, five by five filters applied at stride one, and had a couple of conv layers, a few pooling layers, and then some fully connected layers at the end. And this fairly simple ConvNet was very successfully applied to digit recognition. So AlexNet, from 2012, which you've also heard about before in previous classes, was the first large scale convolutional neural network that was able to do well on the ImageNet classification task. In 2012, AlexNet was entered in the competition and was able to outperform all previous non deep learning based models by a significant margin, and so this was the ConvNet that started the spree of ConvNet research and usage afterwards. The basic AlexNet architecture is a conv layer followed by a pooling layer, then normalization, so conv, pool, norm, then a few more conv layers, a pooling layer, and then several fully connected layers afterwards. So this actually looks very similar to the LeNet network that we just saw.
There are just more layers in total. There are five of these conv layers and two fully connected layers before the final fully connected layer going to the output classes. So let's first get a sense of the sizes involved in AlexNet. If we look at the input to AlexNet, this was trained on ImageNet, with input images of size 227 by 227 by 3. And if we look at this first layer, which is a conv layer, it's 11 by 11 filters, 96 of them, applied at stride 4. So let's just think about this for a moment. What's the output volume size of this first layer? And there's a hint. Remember we have our input size and we have our convolutional filters, right, and we have this formula, which is the hint over here, that gives you the size of the output dimensions after applying the conv. Remember, it was the full input size, minus the filter size, divided by the stride, plus one. So given that that's written up here for you, does anyone have a guess at the final output size after this conv layer? [student speaks off mic] - So I heard 55 by 55 by 96. Yep, that's correct. Our spatial dimensions at the output are going to be 55 in each dimension, and then we have 96 total filters, so the depth after our conv layer is going to be 96. So that's the output volume. And what's the total number of parameters in this layer? Remember we have 96 11 by 11 filters. [student speaks off mic] - 96 by 11 by 11, almost. So yes, there's another factor of three, yes, that's correct. Each of the filters is going to look at a local region of 11 by 11 by three, because the input depth was three. So that's each filter's size, and we have 96 of these in total, so there are about 35K parameters in this first layer. Okay, so now if we look at the second layer, this is a pooling layer, and in this case we have three by three filters applied at stride two. So what's the output volume of this layer after pooling? And again we have a hint, very similar to the last question. Okay, 27 by 27 by 96. Yes, that's correct. The pooling layer is basically going to use the same formula that we had here, because the pooling is applied at a stride of two, so we're going to use the same formula to determine the spatial dimensions, and so the spatial dimensions are going to be 27 by 27, and pooling preserves the depth: we had 96 as the input depth, and it's still going to be 96 at the output. Next question: what's the number of parameters in this layer? I hear some muttering. [student answers off mic] - Nothing. Okay. Yes, so a pooling layer has no parameters, so it's kind of a trick question. Yes, question? [student speaks off mic] - The question is, why are there no parameters in the pooling layer? The parameters are the weights that we're trying to learn, and convolutional layers have weights that we learn, but in pooling all we do is apply a rule: we look at the pooling region and take the max. So there are no parameters that are learned. So we can keep doing this, and it's a good exercise to go through this and figure out the sizes and the parameters at every layer. And if you do this all the way through, this is the final architecture that you end up with. There are 11 by 11 filters at the beginning, then five by five, and some three by three filters.
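(As a quick aid for that exercise, here's a tiny Python helper that reproduces the arithmetic above. The formulas are the ones from the slide; the bias handling is just one common convention, shown only for illustration.)

```python
def conv_output_size(input_size, filter_size, stride, pad=0):
    # Output spatial size: (input - filter + 2*pad) / stride + 1
    return (input_size - filter_size + 2 * pad) // stride + 1

def conv_params(filter_size, input_depth, num_filters, bias=True):
    # Each filter has filter_size * filter_size * input_depth weights (plus a bias).
    per_filter = filter_size * filter_size * input_depth + (1 if bias else 0)
    return per_filter * num_filters

# CONV1: 96 filters of 11x11 applied at stride 4 to a 227x227x3 input.
print(conv_output_size(227, 11, 4))        # 55  -> output volume 55x55x96
print(conv_params(11, 3, 96, bias=False))  # 34,848, i.e. ~35K weights (plus 96 biases)

# POOL1: 3x3 pooling at stride 2 on the 55x55x96 volume; no learnable parameters.
print(conv_output_size(55, 3, 2))          # 27  -> output volume 27x27x96
```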
And so these are generally pretty familiar looking sizes that you've seen before, and then at the end we have a couple of fully connected layers of size 4096, and finally the last layer is FC8, going to the softmax over the 1000 ImageNet classes. Just a couple of details about this: it was the first use of the ReLU non-linearity, which we've talked about as the most commonly used non-linearity. They used local response normalization layers, basically trying to normalize the response across neighboring channels, but this is something that's not really used anymore; other people showed that it didn't have much of an effect. There's a lot of heavy data augmentation, and you can look in the paper for more details, but things like flipping, jittering, cropping, color normalization, all of these things, which you'll probably find useful when you're working on your projects, for example, so a lot of data augmentation here. They also used dropout, a batch size of 128, and learned with SGD with momentum, which we talked about in an earlier lecture, and basically just started with a base learning rate of 1e-2, reduced it by a factor of 10 every time it plateaued, and kept going until they finished training, with a little bit of weight decay. And in the end, in order to get the best numbers, they also did an ensembling of models, so training multiple of these and averaging them together, and this also gives an improvement in performance. One other thing I want to point out is that if you look at this AlexNet diagram up here, it looks kind of like the normal ConvNet diagrams that we've been seeing, except for one difference, which is that it's kind of split into these two different rows or columns going across. The reason for this is mostly a historical note: AlexNet was trained on GTX 580 GPUs, older GPUs that only had three gigs of memory, so it couldn't actually fit the entire network, and what they ended up doing was spreading the network across two GPUs. On each GPU you would have half of the neurons, or half of the feature maps. So for example, if you look at this first conv layer, we have a 55 by 55 by 96 output, but if you look at this diagram carefully, and you can zoom in on the actual paper, you can see that it's actually only 48 depth-wise on each GPU; they just split the feature maps in half. And what happens is that for most of these layers, for example conv 1, 2, 4, and 5, the connections are only with feature maps on the same GPU, so you would take as input half of the feature maps, the ones that were on the same GPU as before, and you don't look at the full 96 feature maps from the previous layer; you just take as input the 48 on that GPU. And then there are a few layers, conv 3 as well as FC 6, 7, and 8, where the GPUs do talk to each other, and so there are connections with all feature maps in the preceding layer. So there's communication across the GPUs, and each of these neurons is then connected to the full depth of the previous input layer. Question. - [Student] It says the full simplified AlexNet architecture. [mumbles] - Oh okay, so the question is, why does it say full simplified AlexNet architecture here?
It just says that because I didn't put all the details on here. For example, this is the full set of layers in the architecture, and the strides and so on, but the normalization layer and other details are not written on here. And then just one little note: if you look at the paper and try to work out the math and the architecture, there's a little bit of an issue on the very first layer. If you look in the figure, they'll say 224 by 224, but the numbers only actually work out if you treat the input as 227. AlexNet was the winner of the ImageNet classification benchmark in 2012; you can see that it cut the error rate by quite a large margin. It was the first CNN-based winner, and it was widely used as a base architecture almost ubiquitously from then until a couple years ago. It's still used quite a bit; it's used in transfer learning for lots of different tasks, and so it was used for basically a long time and was very famous. Now, though, there have been some more recent architectures that have generally just had better performance, and we'll talk about these next; these are going to be the more common architectures that you'll want to use in practice. So just quickly, first, in 2013 the ImageNet challenge was won by something called ZFNet. Yes, question. [student speaks off mic] - So the question is whether there's intuition for why AlexNet was so much better than the ones that came before. Deep learning ConvNets are just a very different kind of approach and architecture; this was the first deep learning based approach, the first ConvNet that was used. So in 2013 the challenge was won by something called ZFNet (Zeiler-Fergus Net), named after its creators. This mostly improved hyperparameters over AlexNet. It had the same number of layers and the same general structure, and they made a few changes, things like changing the stride size and different numbers of filters, and after playing around with these hyperparameters more, they were able to improve the error rate. But it's still basically the same idea. Then in 2014 there were a couple of architectures that were more significantly different and made another jump in performance, and the main difference with these networks, first of all, was much deeper networks. So from the eight layer networks in 2012 and 2013, now in 2014 we had two very close winners that were around 19 layers and 22 layers, so significantly deeper. The winner was GoogLeNet, from Google, but very close behind was something called VGGNet from Oxford, and actually, on the localization challenge and some of the other tracks, VGG got first place. So these were both very, very strong networks. So let's first look at VGG in a little bit more detail. The VGG network is the idea of much deeper networks with much smaller filters. They increased the number of layers from the eight layers in AlexNet to models with 16 to 19 layers in VGGNet. And one key thing that they did was they kept very small filters, only three by three convs all the way through, which is basically the smallest conv filter size that still looks at a little bit of the neighboring pixels. And they just kept this very simple structure of three by three convs with periodic pooling all the way through the network. This very simple, elegant network architecture was able to get 7.3% top five error on the ImageNet challenge.
So first, the question of why use smaller filters. When we take these small filters we have fewer parameters, and we try to stack more of them: instead of having larger filters, we have smaller filters with more depth, more of them stacked, and what happens is that you end up with the same effective receptive field as if you had a single seven by seven convolutional layer. So here's a question: what is the effective receptive field of three of these three by three conv layers with stride one? If you stack three 3 by 3 conv layers with stride one, what's the effective receptive field, the total spatial area of the input that a neuron at the top of the three layers is looking at? I heard fifteen pixels; why fifteen pixels? - [Student] Okay, so the reason given was because they overlap-- - Okay, so the reason given was because they overlap, which is on the right track. What's actually happening is this: at the first layer, the receptive field is three by three. Then at the second layer, each neuron looks at a three by three region of first-layer outputs, but the corners of that three by three region reach one additional pixel on each side in the original input. So the second layer is actually looking at a five by five receptive field, and if you do this again, the third layer looks at three by three in the second layer, and if you draw out this pyramid, that corresponds to seven by seven in the input layer. So the effective receptive field here is seven by seven, which is the same as one seven by seven conv layer. So this has the same effective receptive field as a seven by seven conv layer, but it's deeper, it's able to have more non-linearities in there, and it also has fewer parameters. If you look at the total number of parameters, each of these three by three conv filters has three times three times the input depth parameters, so three times three times C, times the total number of output feature maps, which is again C if we preserve the number of channels. So you get three times three times C times C for each of these layers, and we have three layers, so it's three times that number, compared to a single seven by seven layer where, by the same reasoning, you get seven squared times C squared. So you have fewer parameters in total, which is nice. Now if we look at this full network, there are a lot of numbers up here that you can go back and look at more carefully, but if we work out all of the sizes and numbers of parameters the same way we calculated the AlexNet example (this is a good exercise to go through), we can see that, going through it the same way, we have a couple of conv layers and a pooling layer, a couple more conv layers, a pooling layer, several more conv layers, and so on, and this just keeps going. If you count the total number of convolutional and fully connected layers, we have 16 in this case for VGG 16, and then VGG 19 is just a very similar architecture with a few more conv layers in there.
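To double-check that receptive-field and parameter arithmetic, here is a small Python sketch; the channel count C is an arbitrary illustrative value, and the receptive-field formula assumes stride-1 layers as in the example above.

```python
# Effective receptive field of a stack of 3x3, stride-1 conv layers:
# each extra layer extends the field by (kernel - 1) pixels per axis (3 -> 5 -> 7).
def receptive_field(num_layers, kernel=3):
    rf = 1
    for _ in range(num_layers):
        rf += kernel - 1
    return rf

print(receptive_field(3))  # 7, same as a single 7x7 conv

# Parameter comparison (ignoring biases), with C input and C output channels:
C = 64  # assumed channel count, just for illustration
stacked_3x3 = 3 * (3 * 3 * C * C)   # three 3x3 layers -> 27 * C^2
single_7x7 = 7 * 7 * C * C          # one 7x7 layer    -> 49 * C^2
print(stacked_3x3, single_7x7)
```

So the stacked version gets the same seven by seven coverage with roughly half the parameters, plus two extra non-linearities along the way.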
And so the total memory usage of this network: just making a forward pass and counting up all of these numbers (the memory column here is written in terms of the total number of values, like we calculated earlier), at four bytes per number this works out to roughly 100 megabytes per image. That's the scale of the memory usage, and this is only for a forward pass; when you do a backward pass you have to store even more, so this is pretty heavy memory-wise. At 100 megabytes per image, if you only have five gigs of total memory, you're only going to be able to hold about 50 of these. Also, the total number of parameters here is 138 million, which compares with 60 million for AlexNet. Question? [student speaks off mic] - So the question is what we mean by deeper: is it the number of filters or the number of layers? Deeper in this case always refers to layers. There are two usages of the word depth, which is confusing: one is the depth of a data volume, width by height by depth, where you can use the word depth, but in general when we talk about the depth of a network we mean the total number of layers in the network, and usually in particular the number of layers with trainable weights, so convolutional layers and fully connected layers. [student mumbles off mic] - Okay, so the question is, within each layer, what do the different filters mean? We talked about this back in the convnet lecture, so you can also go back and refer to that, but each filter is a set of weights, say three by three spatially by the input depth, and sliding it over the input produces one feature map, one activation map of the responses at the different spatial locations. And then we can have as many filters as we want, for example 96, and each of these produces a feature map. Each filter corresponds to a different pattern that we're looking for in the input: we convolve it around, see the responses everywhere in the input, and create a map of these, and then another filter is convolved over the image and creates another map. Question. [student speaks off mic] - So the question is whether there's intuition for why, as you go deeper into the network, we have more channel depth, more filters. You can have any design you want, so you don't have to do this, but in practice you will see it a lot, and one of the reasons is that people try to maintain a relatively constant level of compute: as you go deeper into the network you're usually also downsampling, so the spatial area gets smaller, and then increasing the depth isn't as expensive, because each feature map is spatially smaller. So that's one reason. Question. [student speaks off mic] - So the question is whether, performance-wise, there's any reason to use an SVM loss instead of a softmax loss at the end. No, for a classifier you can use either one, and you did that earlier in the class as well, but in general softmax losses have worked well and are the standard choice for classification here. Okay, one more question.
[student mumbles off mic] - Yes, so the question is whether we have to store all of this memory, or whether we can throw away the parts we don't need. And yes, this is partly true: some of it you don't need to keep, but you're also going to be doing a backward pass, and for the most part, when you're doing the chain rule, you need a lot of these activations, so in large part a lot of this does need to be kept. If we look at the distribution of where memory is used and where the parameters are, you can see that a lot of the memory is in these early layers, where you still have large spatial dimensions, and a lot of the parameters are actually in the last layers: the fully connected layers have a huge number of parameters because of all these dense connections. That's something to keep in mind, and later on we'll see some networks actually get rid of these fully connected layers and save a lot on the number of parameters. One last thing to point out: you'll also see different ways of naming all of these layers. Here I've written out exactly what the layers are; conv3-64 means three by three convs with 64 total filters. But for VGGNet, on the diagram on the right, there are also common ways that people refer to each group of layers, so each orange block here is labeled as conv1-1, conv1-2, and so on. Just something to keep in mind. So VGGNet ended up getting second place in the ImageNet 2014 classification challenge and first in localization. They followed a very similar training procedure to Alex Krizhevsky's for AlexNet. They didn't use local response normalization; as I mentioned earlier, they found it didn't really help, so they took it out. You'll see VGG 16 and VGG 19 as common variants of this architecture; that's just the number of layers, with 19 slightly deeper than 16. In practice VGG 19 works a very little bit better but uses a bit more memory, so you can use either, but 16 is very commonly used. For best results, like AlexNet, they did ensembling, averaging several models together to get better results. They also showed in their work that the FC7 features, the last fully connected layer before going to the 1000 ImageNet classes, the 4096-dimensional layer just before that, are a good feature representation that can be used as-is: you can extract these features from other data and they generalize to other tasks as well. So FC7 is a good feature representation. Yeah, question. [student speaks off mic] - Sorry, what was the question? Okay, so the question is what localization is here. This is a task we'll talk about a bit more in a later lecture on detection and localization, so I don't want to go into detail, but basically, given an image, it's not just classifying what the class of the image is, but also drawing a bounding box around where that object is in the image. The difference from detection, which is a very related task, is that in detection there can be multiple instances of the object in the image; in localization we assume there's just one, so it's classification plus this additional bounding box. So we looked at VGG, which was one of the deep networks from 2014, and now we'll talk about GoogleNet, which was the other one, the one that won the classification challenge.
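Before moving on to GoogleNet, here is a quick sketch of that FC7 feature-extraction idea, assuming torchvision's layout of the pretrained VGG16 (the placeholder input and the exact layer indexing are my assumptions, not part of the lecture).

```python
import torch
import torchvision

# Load a pretrained VGG16; in torchvision the classifier is a Sequential
# whose final Linear layer maps 4096 -> 1000 ImageNet classes.
vgg = torchvision.models.vgg16(pretrained=True)
vgg.eval()

# Drop that final 1000-way layer so the network outputs the 4096-d FC7 activation.
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])

with torch.no_grad():
    image = torch.randn(1, 3, 224, 224)  # placeholder for a preprocessed image
    fc7_features = vgg(image)            # shape [1, 4096]
print(fc7_features.shape)
```

The 4096-dimensional vector you get out can then be fed to a simple classifier for a completely different task, which is the transfer-learning use of VGG described above.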
So GoogleNet again was a much deeper network, with 22 layers, but one of the main insights and special things about GoogleNet is that it really looked at the problem of computational efficiency and tried to design a network architecture that was very efficient in the amount of compute. They did this using this inception module, which we'll go into in more detail, basically stacking a lot of these inception modules on top of each other. There are also no fully connected layers in this network, so they got rid of those and saved a lot of parameters, and in total there are only five million parameters, which is twelve times less than AlexNet's 60 million, even though the network is much deeper. It got 6.7% top-5 error. So what's the inception module? The idea behind the inception module is that they wanted to design a good local network topology; you can think of it as a network within a network, and then stack a lot of these local modules on top of each other. Within this local network that they call an inception module, what they do is apply several different kinds of filter operations in parallel on top of the same input coming into the layer. So we have our input coming in from the previous layer, and then we do different kinds of convolutions: a one by one conv, a three by three conv, a five by five conv, and they also have a pooling operation, in this case three by three pooling. You get all of these different outputs from these different operations, and then they concatenate all of these filter outputs together depth-wise, which creates one tensor output at the end that gets passed on to the next layer. If we look at just a naive way of doing this, we do exactly that: we have all of these different operations, we get the outputs, and we concatenate them together. So what's the problem with this? It turns out that computational complexity is going to be a problem here. Let's look more carefully at an example. Here, just as an example, I've put a one by one conv with 128 filters, a three by three conv with 192 filters, and a five by five conv with 96 filters. Assume everything uses the stride and padding that maintain the spatial dimensions, and that we have this 28 by 28 by 256 input coming in. So what is the output size of the one by one conv with 128 filters? Who has a guess? Okay, I heard 28 by 28 by 128, which is correct. With a one by one conv we maintain the spatial dimensions, and each conv filter looks through the entire 256 depth of the input, but the output is a 28 by 28 feature map for each of the 128 filters that we have in this conv layer, so we get 28 by 28 by 128. And now if we do the same thing and look at the output sizes of all of the different filters here: after the three by three conv we have a volume of 28 by 28 by 192; after the five by five conv we have 96 filters, so 28 by 28 by 96; and the pooling layer preserves the input depth, and because of its stride it also preserves the spatial dimensions.
And so now if we look at the output size after filter concatenation, what we get is 28 by 28 (these are all 28 by 28), and we're concatenating depth-wise, so we get 28 by 28 by all of these depths added together, and the total output size is 28 by 28 by 672. So the input to our inception module was 28 by 28 by 256, and the output from this module is 28 by 28 by 672: we kept the same spatial dimensions and blew up the depth. Question. [student speaks off mic] - Okay, so the question is, how are we getting 28 by 28 for everything? Here we're doing the appropriate zero padding in order to maintain the spatial dimensions, and that way we can do this filter concatenation depth-wise. Question in the back. [student speaks off mic] - Okay, the question is what the 256 depth at the input is, and this is not the input to the whole network, this is the input just to this local module that I'm looking at. In this case, 256 is the depth coming out of the previous inception module just before this one. And coming out of this module we have 28 by 28 by 672, which is going to be the input to the next inception module. Question. [student speaks off mic] - Okay, the question is how we got 28 by 28 by 128 for the first conv. It's a one by one convolution, so we take this one by one filter and slide it spatially across our 28 by 28 by 256 input, and at each location it does a dot product through the entire 256 depth. So we slide this one by one conv over spatially and we get a feature map out that's 28 by 28 by one: there's one number at each spatial location. Each filter produces one of these 28 by 28 by 1 maps, and we have 128 filters in total, and that produces 28 by 28 by 128. Okay, so if you look at the number of operations happening in a convolutional layer, take the first one, this one by one conv: as I was just saying, at each location we're doing a one by one by 256 dot product, so there are 256 multiply operations happening there; then for each feature map we have 28 by 28 spatial locations, which is where the first 28 times 28 in the expression comes from, and we do those 256 multiplications at each one of them; and then we have 128 total filters at this layer, producing 128 feature maps. So the total number of these operations is 28 times 28 times 128 times 256. The same reasoning applies to the three by three conv and the five by five conv, exactly the same principle, and in total we get about 854 million operations happening here (the short calculation below spells this out). - [Student] And the 128, 192, and 96 are just values [mumbles] - Question: are the 128, 192, and 96 just values that I picked? Yes, these are values I picked for the example, but not ones I just made up; they're similar to the ones you'll see in a particular layer of GoogleNet. Each module in GoogleNet has a different set of these parameters, and I picked one that was similar to one of them. So these operations are very expensive computationally. And the other thing I want to note is that the pooling layer also adds to this problem, because it preserves the full feature depth.
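Here is that multiply count worked out in a few lines of Python, using exactly the example numbers from above (a sketch of the arithmetic, not code from the lecture):

```python
# Multiply counts for the naive inception module example (28x28x256 input).
H = W = 28
C_in = 256

def conv_ops(kernel, c_out):
    # one (kernel x kernel x C_in) dot product per output location, per output filter
    return H * W * c_out * (kernel * kernel * C_in)

ops_1x1 = conv_ops(1, 128)   # ~25.7M multiplies
ops_3x3 = conv_ops(3, 192)   # ~347M multiplies
ops_5x5 = conv_ops(5, 96)    # ~482M multiplies
print((ops_1x1 + ops_3x3 + ops_5x5) / 1e6)  # roughly 854M in total
```

Most of the cost sits in the 3x3 and especially the 5x5 branch, because every one of their filters has to look through the full 256-deep input, which is exactly what the bottleneck trick attacks.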
The pooling layer makes this worse because, at every layer, your total depth can only grow: you take the full feature depth from the pooling branch, plus all the additional feature maps from the conv branches, and concatenate them together. Here our input was 256 deep and our output is 672 deep, and this just keeps increasing as you go up. So how do we deal with this and keep it manageable? One of the key insights that GoogleNet used was that we can address this with bottleneck layers: project the feature maps to a lower depth before the expensive convolutional operations. What exactly does that mean? As a reminder, a one by one convolution takes your input volume and performs a dot product at each spatial location; it preserves the spatial dimensions but reduces the depth, projecting the input depth down to a lower dimension. It's basically a linear combination of your input feature maps. So the main idea is that it projects your depth down, and the inception module adds these one by one convs in a bunch of places in order to alleviate the expensive compute: before the three by three and five by five conv layers it puts in a one by one convolution, and after the pooling layer it also puts in an additional one by one convolution. These are the one by one bottleneck layers that are added in. So how does this change the math we were looking at earlier? What happens is that we still have the same 28 by 28 by 256 input, but these one by one convs reduce the depth dimension, so before the three by three convs, if I put a one by one conv with 64 filters, its output is 28 by 28 by 64. So instead of 28 by 28 by 256 going into the three by three conv, we only have a 28 by 28 by 64 block going in; the input to these expensive conv layers is smaller. The same thing happens for the five by five conv, and for the pooling branch we reduce the depth after the pooling comes out. If you work out the math the same way for all of the convolutional ops, now with these one by one convs added on top of the three by threes and five by fives, the total number of operations is 358 million, which is much less than the 854 million we had in the naive version. So you can see how you can use this one by one conv, and the number of filters in it, to control your computation. Yes, question in the back. [student speaks off mic] - Yes, so the question is whether anyone has looked into what information might be lost by doing this one by one conv at the beginning. There might be some information loss, but at the same time, when you do these projections you're taking a linear combination of input feature maps that have redundancy in them, and you're also introducing an additional non-linearity after the one by one conv, so it actually helps in that way by adding a little more depth. I don't think there's a rigorous analysis of this, but in general this works better, and there are reasons why it helps as well.
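For reference, here is a minimal sketch of an inception-style module with those one by one "reduce" layers in PyTorch. The filter counts and the reduce sizes are illustrative assumptions, not GoogleNet's exact configuration.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Sketch of an inception module with 1x1 bottleneck ("reduce") layers."""
    def __init__(self, c_in):
        super().__init__()
        self.branch1 = nn.Conv2d(c_in, 128, kernel_size=1)              # plain 1x1 branch
        self.branch3 = nn.Sequential(                                    # 1x1 reduce, then 3x3
            nn.Conv2d(c_in, 64, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 192, kernel_size=3, padding=1))
        self.branch5 = nn.Sequential(                                    # 1x1 reduce, then 5x5
            nn.Conv2d(c_in, 64, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 96, kernel_size=5, padding=2))
        self.branch_pool = nn.Sequential(                                # pool, then 1x1 projection
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(c_in, 64, kernel_size=1))

    def forward(self, x):
        # All branches keep the spatial size (via padding), so their outputs
        # can be concatenated along the depth dimension.
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

x = torch.randn(1, 256, 28, 28)
print(InceptionModule(256)(x).shape)  # [1, 480, 28, 28] with these assumed filter counts
```

The key design choice is that the expensive 3x3 and 5x5 convolutions only ever see the reduced 64-deep input, and the pool branch gets its own 1x1 projection so the output depth stays under control.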
Okay, so we're basically using these one by one convs to help manage the computational complexity, and what GoogleNet does is take these inception modules and stack them all together. This is the full inception architecture. If we look at it in a little more detail (I've flipped it here because it's so big it no longer fits vertically on the slide), what we start with is this stem network, which is the more vanilla, plain conv net that we've seen earlier: a sequence of layers, conv, pool, a couple of convs, another pool, just to get started. After that we have all of our inception modules stacked on top of each other, and on top we have the classifier output. Notice that they've really removed the expensive fully connected layers; it turns out the model works great without them, and you save a lot of parameters. Then you can also see these couple of extra stems coming out: these are auxiliary classification outputs, and they're just little mini-networks with an average pooling layer, a one by one conv, a couple of fully connected layers, and a 1000-way softmax over the ImageNet classes. So you're actually applying your ImageNet classification loss in three separate places: the standard end of the network, as well as these two places earlier in the network. The reason they do this is that this is a deep network, and they found that having these additional auxiliary classification outputs gets more gradient injected at the earlier layers, so there's more helpful signal flowing in, because these intermediate layers should also be useful; you should be able to do classification based on some of them as well. So this is the full architecture: there are 22 total layers with weights, and within each of these modules each of the one by one, three by three, and five by five convs is a weight layer, counting all of these parallel layers. In general it's a relatively carefully designed architecture; part of it is based on the intuitions we've been talking about, and part of it is just that Google had huge clusters and cross-validated across all kinds of design choices, and this is what ended up working well. Question? [student speaks off mic] - Yeah, so the question is whether the auxiliary outputs are actually useful for the final classification. When they're training, I think they do average all of these losses, and I think they are helpful. I can't remember whether in the final architecture they average all of the outputs or just take one; it seems very possible that they would use all of them, but you'll need to check on that. [student speaks off mic] - So the question is, for the bottleneck layers, is it possible to use some other type of dimensionality reduction, and yes, you can use other kinds of dimensionality reduction. The benefit of the one by one conv is that you get this effect, but it's a conv layer just like any other: it's part of the whole network, you train the full network with backprop through everything, and it learns how to combine the previous feature maps. Okay, yeah, question in the back.
[student speaks off mic] - Yes, so the question is whether any weights are shared or they're all separate, and yes, all of these layers have separate weights. Question. [student speaks off mic] - Yes, so the question is why we have to inject gradients at earlier layers. From the classification output at the very end, the gradient is passed all the way back through the chain rule, but the problem is that with very deep networks, as you go all the way back through them, some of this gradient signal can become diminished and lost near the beginning, and that's why having these additional outputs at earlier parts can provide some additional signal. [student mumbles off mic] - So the question is whether you do a separate backprop for each output. No, it's just one backprop all the way through; if you were to draw out the computational graph, you can think of these three outputs as being added together at the end, so you get your final loss and you can backprop all of these gradients through at once, as if they were summed at the end of the computational graph. Okay, in the interest of time, because we still have a lot to get through, I'll take other questions offline. So GoogleNet: basically 22 layers, an efficient inception module, no fully connected layers, 12 times fewer parameters than AlexNet, and it's the ILSVRC 2014 classification winner. And now let's look at the 2015 winner, which is the ResNet architecture, and here the idea is really this revolution of depth. We were starting to increase depth in 2014, and now we have a hugely deeper model: the ResNet architecture was 152 layers. So let's look at that in a little more detail. The ResNet architecture gets to extremely deep networks, much deeper than any network before, and it does this using the idea of residual connections, which we'll talk about. They had a 152-layer model for ImageNet and were able to get 3.57% top-5 error with it, and the really special thing is that they swept all of the classification and detection contests in the ImageNet ILSVRC benchmark and in this other benchmark called COCO. It basically won everything; it was just clearly better than everything else. So now let's go into a little bit of the motivation behind ResNet and these residual connections. The question they started off trying to answer is: what happens when we try to stack deeper and deeper layers on a plain convolutional neural network? If we take something like VGG, or some normal network that's just stacks of conv and pool layers, can we just keep extending it, get deeper, and do better? And the answer is no. If you look at what happens when you get deeper, here I'm comparing a 20-layer network and a 56-layer network, and these are just plain networks; you'll see in the test error here on the right that the 56-layer network is doing worse than the 20-layer network. So the deeper network was not able to do better. But then the really weird thing is if you look at the training error: here we have, again, the 20-layer network and the 56-layer network. For the 56-layer network, one of the obvious suspects you'd think of is that with a really deep network and tons of parameters, it's probably starting to overfit at some point.
But what actually happens is this: when you're overfitting, you would expect very low training error and just bad test error, but here the 56-layer network is also doing worse than the 20-layer network on the training error. So even though the deeper model performs worse, this is not caused by overfitting. The hypothesis of the ResNet creators is that the problem is actually an optimization problem: deeper models are just harder to optimize than shallower networks. The reasoning is that a deeper model should be able to perform at least as well as a shallower model. There's a solution by construction: you take the learned layers from your shallower model, copy them over, and for the remaining additional deeper layers you just add identity mappings. By construction, this should work at least as well as the shallower model, and a deeper model that wasn't able to learn properly should at least be able to learn this. So, motivated by this, their solution was: how can we make it easier for our architecture to learn these kinds of solutions, or at least something like them? Their idea is that instead of just stacking layers on top of each other and having every layer try to learn some underlying desired mapping directly, let's have blocks where we fit a residual mapping instead of a direct mapping. What this looks like, on the right here, is that the input to the block is just the input X coming in, and on the side we use our layers to try to fit the residual of the desired function, H of X minus X, instead of the desired function H of X directly. At the end of the block we have this skip connection, this loop, where we take our input and just pass it through as the identity: if we had no weight layers in between, the output would just be the same as the input. Now we use the additional weight layers to learn some delta, some residual, relative to X. So the output of this block is just our original X plus the residual, which is basically a delta. The idea is that this makes things easy in, for example, the case where the identity is ideal: you can just squash all of the weights of F of X, set them to zero, and you get the identity as the output, so you can get something close to that solution by construction we had earlier. So this is a network architecture that says: let's have the weight layers learn a residual, something that modifies X, which is more likely to end up close to X, rather than learning the full mapping directly. Okay, any questions about this? [student speaks off mic] - The question is whether these are the same dimension? Yes, these two paths are the same dimension. In general either they're the same dimension, or what they actually do is use projection shortcuts, with different ways of padding, to make things work out to the same depth. Yes. - [Student] When you use the word residual you were talking about [mumbles off mic] - So the question is what exactly we mean by residual: is the output of this transformation the residual?
So we can think of the output here as F of X plus X, where F of X is the output of our transformation and X is our input, just passed through by the identity. With a plain layer, what we're trying to do is learn something like H of X, but what we saw earlier is that it's hard to learn a good H of X as we get to very deep networks. So here the idea is to break it down: write H of X as F of X plus X, and just try to learn F of X. Instead of learning H of X directly, we just learn what we need to add to or subtract from the input as we move on to the next layer, so you can think of it as modifying the input in place, in a sense. [interrupted by student mumbling off mic] - The question is, when we're saying the word residual, are we talking about F of X? Yes, F of X is what we're calling the residual, and it has exactly that meaning. Yes, another question. [student mumbles off mic] - So the question is whether, in practice, we just sum F of X and X together, or learn some weighted combination, and you just do a direct sum, because the direct sum is exactly the idea of learning what we have to add or subtract onto X. Is the main intuition clear to everybody? Question. [student speaks off mic] - Yeah, so the question is that it's not clear why learning the residual should be easier than learning the direct mapping. This is just their hypothesis, and the hypothesis is that if you're learning the residual, you only have to learn the delta to X. The reasoning is that something like the solution by construction, with some number of learned shallow layers and identity mappings on top, should have been a good solution, which implies that for a lot of these layers something close to the identity would be a good layer. Because of that, we formulate the block as learning the identity plus a small delta; if the identity really is best, we can just squash the F of X transformation to zero, which seems relatively easy to learn, and it also lets us get things that are close to identity mappings. Again, this is not something that's necessarily proven; it's just the intuition and the hypothesis, and we'll also see later some work where people challenge this and say maybe it's not actually the residuals that are so necessary, but at least this is the hypothesis of this paper, and in practice this model was able to do very well. Question. [student speaks off mic] - Yes, so the question is whether people have tried other ways of combining the inputs from previous layers, and yes, this is a very active area of research: how to formulate all of these connections, and what's connected to what in these structures. We'll see a few more examples of different network architectures briefly later, but this is an active area of research. Okay, so we basically have all of these residual blocks stacked on top of each other, and we can see the full ResNet architecture. Each of these residual blocks has two three by three conv layers in it, and there's also been work showing that this happens to be a good configuration that works well. We stack all of these blocks together very deeply.
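Here is a minimal sketch of that basic two-conv residual block in PyTorch, where the skip connection just adds the input back onto the learned residual. The channel count, the batch-norm placement, and the exact activation layout are assumptions for illustration, not taken verbatim from the paper.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Two 3x3 convs on the residual branch F(x), plus the identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)   # output = F(x) + x

block = BasicResidualBlock(64)
print(block(torch.randn(1, 64, 56, 56)).shape)  # same shape as the input
```

Note that if the conv weights were driven to zero, the block would reduce to (a ReLU of) the identity, which is exactly the "easy to fall back to identity" property described above.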
This very deep architecture enables something like 150-plus layers, and what we do is stack all of these blocks; periodically we also double the number of filters and downsample spatially using a stride of two when we do so. There's an additional conv layer at the very beginning of the network, and at the end there are no fully connected layers; we just have a global average pooling layer that averages over everything spatially and then feeds into the final 1000-way classification layer. So this is the full ResNet architecture, and it's very simple and elegant, just stacking up all of these residual blocks, with total depths of 34, 50, 101, and up to 152 for ImageNet. One additional thing to know is that for the very deep networks, the ones that are more than 50 layers deep, they also use bottleneck layers, similar to what GoogleNet did, in order to improve efficiency. Within each block they have a one by one conv that first projects the input down to a smaller depth: if we're looking at, say, a 28 by 28 by 256 input, the one by one conv projects the depth down and we get 28 by 28 by 64. Now the three by three conv (in here they only have one) operates over this reduced depth, so it's less expensive, and afterwards there's another one by one conv that projects the depth back up to 256. This is the actual block that you'll see in the deeper networks. In practice, ResNet also uses batch normalization after every conv layer, Xavier initialization with an extra scaling factor that they introduced to improve the initialization, and it's trained with SGD plus momentum. For the learning rate they use a similar kind of schedule where you decay the learning rate when the validation error plateaus, a mini-batch size of 256, a little bit of weight decay, and no dropout. Experimentally, they showed that they could train these very deep networks without degrading, with good gradient flow coming all the way back down through the network. They tried up to 152 layers on ImageNet and 1200 layers on CIFAR, which is a smaller dataset that you've played with, and they also saw that the deeper networks were able to achieve lower training error, as expected, so you don't get the strange plots we saw earlier where the behavior went in the wrong direction. From here they were able to sweep first place at all of the ILSVRC competitions and all of the COCO competitions in 2015 by significant margins. Their top-5 error was 3.6% for classification, which is actually better than human performance on this benchmark: there was a human baseline that came from our lab, where Andrej Karpathy spent something like a week training himself and then did the task himself, and I think he was somewhere around 5%, so ResNet was basically able to do better than at least that human. Okay, so these are the main networks that have been used recently. We had AlexNet starting things off, VGG and GoogleNet are still very popular, but ResNet is the most recent, best-performing model, and if you're training a new network and ResNet is an option, you should try working with it.
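For the deeper ResNets, the bottleneck version of the block might look something like the sketch below (channel sizes follow the 256-to-64-and-back example above; batch norm is omitted here only to keep the sketch short, and that is an assumption, since the real networks do use it):

```python
import torch
import torch.nn as nn

class BottleneckResidualBlock(nn.Module):
    """Deeper ResNets: 1x1 conv projects 256 -> 64, a 3x3 conv works on the
    reduced depth, and a final 1x1 conv projects back up to 256 before the add."""
    def __init__(self, channels=256, reduced=64):
        super().__init__()
        self.reduce = nn.Conv2d(channels, reduced, kernel_size=1)
        self.conv = nn.Conv2d(reduced, reduced, kernel_size=3, padding=1)
        self.expand = nn.Conv2d(reduced, channels, kernel_size=1)

    def forward(self, x):
        out = torch.relu(self.reduce(x))   # cheap depth reduction
        out = torch.relu(self.conv(out))   # 3x3 conv on the thin volume
        return torch.relu(self.expand(out) + x)   # project back up, then add the skip

block = BottleneckResidualBlock()
print(block(torch.randn(1, 256, 28, 28)).shape)  # [1, 256, 28, 28]
```

The design choice is the same one GoogleNet made: keep the spatially expensive 3x3 convolution operating on a much thinner depth, and let cheap 1x1 convs handle the projections in and out.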
So, just quickly, let's look at some comparisons to get a better sense of the complexity involved. Here we have some plots that are sorted by performance, so this is top-1 accuracy, and higher is better. You'll see a lot of the models we've talked about, as well as some different versions of them; this GoogleNet inception family has a V2, a V3, and the best one here is V4, which is actually a ResNet plus Inception combination, so these are more incremental, smaller changes built on top of them, and that's the best performing model here. If we look at the plot on the right, it shows computational complexity: the Y axis is top-1 accuracy, so higher is better; the X axis is the number of operations, so the further right, the more ops you're doing and the more computationally expensive the model is; and the size of each circle shows the memory usage (the gray circles are references for scale). Here we can see that the VGG networks, these green ones, are the least efficient: they have the biggest memory footprint and the most operations, but they do pretty well. GoogleNet is the most efficient here: it's way down on the operations side, and it's a small circle, so low memory usage. AlexNet, our earlier model, has the lowest accuracy; it's relatively small in compute, because it's a smaller network, but it's also not particularly memory efficient. And ResNet has moderate efficiency, in the middle both in terms of memory and operations, and it has the highest accuracy. There are also some additional plots here that you can look at more on your own time: the plot on the left shows the forward pass time in milliseconds, and at the top you can see VGG's forward pass is about 200 milliseconds, so you can get about five frames per second with it, and this is sorted in order. There's also a plot on the right looking at power consumption, and if you look at the paper these come from, there's further analysis of these kinds of computational comparisons. So those were the main architectures that you should really know in depth, be familiar with, and think about actively using. Now I'm going to go briefly through some other architectures that are good to know, either as historical inspirations or as more recent areas of research. The first one is Network in Network, from 2014, and the idea behind it is that on top of these vanilla convolutional layers it introduces what they call MLPconv layers, which are micro-networks, basically a network within a network, which is also the name of the paper. Within each conv layer you stack an MLP, a couple of fully connected layers, on top of the standard conv, to compute more abstract features for these local patches. So instead of sliding just a conv filter around, you slide a slightly more complex, hierarchical set of filters around and use that to get the activation maps. It uses these fully connected, basically one by one conv, layers, and stacks them all up, as in the bottom diagram here, with these networks-within-networks in each of the layers. The main reason to know this is that it was a precursor to GoogleNet and ResNet in 2014 with this idea of bottleneck layers, which you saw used very heavily there.
It also provided a bit of philosophical inspiration for GoogleNet, for this idea of a local network topology, a network within a network, which GoogleNet used with a different kind of structure. Now I'm going to talk about a series of works since ResNet that are mostly geared toward improving ResNet; this is more recent research that has been done since then. I'll go over these pretty fast, at a very high level, and if you're interested in any of them you should look at the papers for more details. So, the authors of ResNet, a little later in 2016, had a paper where they improved the ResNet block design. They basically adjusted which layers were in the ResNet block path and showed that this new structure gives a more direct path for propagating information throughout the network; you want a good path for propagating information all the way up and the gradient all the way back down again, and they showed that this new block was better for that and gave better performance. There are also Wide Residual Networks, a paper that argued that while ResNets made networks much deeper as well as adding residual connections, the residuals are really the important factor, not necessarily having extremely deep networks. So what they did was use wider residual blocks, which just means more filters in every conv layer: where before we might have F filters per layer, they use a factor k, so every layer has F times k filters instead. Using these wider layers, they showed that their 50-layer wide ResNet was able to outperform the original 152-layer ResNet, and it has the additional advantage that, even with the same number of parameters, it's more computationally efficient, because you can parallelize wide layers more easily: convolutions with more filters are just spread across more kernels, whereas depth is more sequential, so it's more computationally efficient to increase your width. Here you can see this work starting to try to understand the separate contributions of width, depth, and residual connections, and making arguments for one versus the other. Another paper around the same time, maybe a little later, is ResNeXt, and this is again the creators of ResNet continuing to push the architecture. Here they also tackle this width idea, but instead of just increasing the width of the residual block through more filters, they add structure: within each residual block there are multiple parallel pathways, and they call the total number of these pathways the cardinality. It's basically taking the one ResNet block with the bottlenecks, making each pathway relatively thinner, and having multiple of them in parallel. You can see that this has some relation both to the idea of wide networks and to the inception module, where layers operate in parallel; ResNeXt has some flavor of that as well. So another approach toward improving ResNets is an idea called stochastic depth, and in this work the motivation is to look more closely at this depth problem.
Once you get deeper and deeper, the typical problem you're going to have is vanishing gradients: your gradients get smaller and eventually vanish as you try to backpropagate them through a very large number of layers. So their motivation is to effectively have shorter networks during training, and they use the idea of dropping a subset of the layers during training. For a randomly chosen subset of the layers they drop the residual weights and just set the block to an identity connection, so you have these shorter networks during training, you can pass your gradients back better, and it's also a little more efficient; it has the flavor of dropout, which you've seen before. Then at test time you use the full deep network that you've trained. So these are some of the works that look at the ResNet architecture, try to understand different aspects of it, and try to improve ResNet training. There are also works now going beyond ResNet, asking what non-ResNet architectures can work comparably to or better than ResNets. One idea is FractalNet, which came out pretty recently, and the argument in FractalNet is that residual representations may not actually be necessary; this goes back to what we were talking about earlier, the motivation of residual networks, which seems to make sense and has good reasons for why it should help, but in this paper they say: here is a different architecture we're introducing, with no residual representations. They think the key is more about transitioning effectively from shallow to deep networks, so they have this fractal architecture, shown on the right, where the layers are composed in a fractal fashion, with both shallow and deep pathways to the output. They have these different-length pathways, they train by dropping out sub-paths, so again there's a dropout-like flavor, and at test time they use the entire fractal network, and they show this gets very good performance. There's another idea called Densely Connected Convolutional Networks, or DenseNet, where you have blocks called dense blocks, and within each block every layer is connected to every other layer after it in a feed-forward fashion. So within a block, the input to the block is also an input to every other conv layer, and as you compute each conv output, those outputs are connected to every layer after them, all concatenated together as inputs to the later conv layers, and they have some other mechanisms for reducing the dimensions and keeping things efficient. Their main takeaway is that they argue this alleviates the vanishing gradient problem, because you have all of these very dense connections, that it strengthens feature propagation, and that it encourages feature reuse, because each feature map that you learn is an input to multiple later layers and gets used multiple times. So these are just a couple of ideas for alternatives to ResNets that still perform comparably or better, and this is another very active area of current research.
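Going back to the stochastic depth idea for a moment, here is a rough sketch of what dropping residual blocks during training might look like. The survival probability, the helper names, and the simple fixed probability are my assumptions for illustration (the actual paper decays the survival probability with depth), but the pattern of random identity during training and a scaled residual at test time follows the description above.

```python
import torch

def stochastic_depth_block(x, residual_fn, survival_prob=0.8, training=True):
    """residual_fn is the block's F(x); the skip connection is always kept."""
    if training:
        # With probability (1 - survival_prob), skip the residual branch entirely,
        # so the block acts as the identity for this forward pass.
        if torch.rand(1).item() < survival_prob:
            return x + residual_fn(x)
        return x
    # At test time, use the full network but scale the residual branch by its
    # survival probability so expectations roughly match training.
    return x + survival_prob * residual_fn(x)
```

Because some blocks collapse to the identity on any given training pass, the gradient only has to travel through an effectively shorter network, which is the point of the method.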
Stepping back, you can see that a lot of this work is looking at how different layers are connected to each other and how depth is managed in these networks. One last thing I want to mention quickly is efficient networks. You saw that GoogleNet was a work looking in this direction of how to build efficient networks, which matters for a lot of practical usage, both training and especially deployment. A recent network in this direction is SqueezeNet, which aims at very efficient networks. They have these things called fire modules, which consist of a squeeze layer with a lot of one by one filters that then feeds into an expand layer with one by one and three by three filters, and they show that with this kind of architecture they're able to get AlexNet-level accuracy on ImageNet with 50 times fewer parameters, and then you can further do network compression on this to get something up to 500 times smaller than AlexNet, with the whole network being only about 0.5 megabytes. So this is one direction of efficient networks and model compression, which we'll cover more in a later lecture, but I just wanted to give you a hint of it. Okay, so today, in summary, we've talked about different kinds of CNN architectures. We looked in depth at four of the main architectures that you'll see in wide usage: AlexNet, one of the early, very popular networks; VGG and GoogleNet, which are still widely used; and ResNet, which is taking over as the thing you should be looking at most when you can. We also looked at the other networks in a brief, high-level overview. The takeaway is that these models are widely available, so you can use them when you need them. There's a trend toward extremely deep networks, but there's also significant research now around the design of how layers are connected, skip connections, what is connected to what, and using these design choices to improve gradient flow. There's an even more recent trend toward examining the necessity of depth versus width versus residual connections, the trade-offs, and what's actually helping, and there are a lot of recent works in this direction that you can look into, some of which I pointed out, if you are interested. And next time we'll talk about recurrent neural networks. Thanks.
Lecture 16: Adversarial Examples and Adversarial Training (Convolutional Neural Networks for Visual Recognition, Spring 2017)
- Okay, sounds like it is. I'll be telling you about adversarial examples and adversarial training today. Thank you. As an overview, I will start off by telling you what adversarial examples are, and then I'll explain why they happen, why it's possible for them to exist. I'll talk a little bit about how adversarial examples pose real world security threats, that they can actually be used to compromise systems built on machine learning. I'll tell you what the defenses are so far, but mostly defenses are an open research problem that I hope some of you will move on to tackle. And then finally I'll tell you how to use adversarial examples to improve other machine learning algorithms, even if you want to build a machine learning algorithm that won't face a real world adversary. Looking at the big picture and the context for this lecture, I think most of you are probably here because you've heard how incredibly powerful and successful machine learning is, that very many different tasks that could not be solved with software before are now solvable thanks to deep learning and convolutional networks and gradient descent, all of these technologies that are working really well. Until just a few years ago, these technologies didn't really work. In about 2013, we started to see that deep learning achieved human level performance at a lot of different tasks. We saw that convolutional nets could recognize objects in images and score about the same as people on those benchmarks, with the caveat that part of the reason the algorithms score as well as people is that people can't tell Alaskan Huskies from Siberian Huskies very well; but modulo the strangeness of the benchmarks, deep learning caught up to about human level performance for object recognition in about 2013. That same year, we also saw that object recognition applied to human faces caught up to about human level, that suddenly we had computers that could recognize faces about as well as you or I can recognize the faces of strangers. You can recognize the faces of your friends and family better than a computer, but when you're dealing with people that you haven't had a lot of experience with, the computer caught up to us in about 2013. We also saw that computers caught up to humans for reading typewritten fonts in photos in about 2013. It even got to the point that we could no longer use CAPTCHAs to tell whether a user of a webpage is human or not, because the convolutional network is better at reading obfuscated text than a human is. So with this context, where deep learning works really well, especially for computer vision, it's a little bit unusual to think about the computer making a mistake. Before about 2013, nobody was ever surprised if the computer made a mistake; that was the rule, not the exception. Today's topic is all about unusual mistakes that deep learning algorithms make, and it wasn't really a serious avenue of study until the algorithms started to work well most of the time; now people study the way that they break, now that breaking is the exception rather than the rule. An adversarial example is an example that has been carefully computed to be misclassified. In a lot of cases we're able to make the new image indistinguishable to a human observer from the original image. Here, I show you one where we start with a panda. On the left is a panda that has not been modified in any way, and a convolutional network trained on the ImageNet dataset is able to recognize it as being a panda.
One interesting thing is that the model doesn't have a whole lot of confidence in that decision; it assigns about 60% probability to this image being a panda. If we then compute exactly the way that we could modify the image to cause the convolutional network to make a mistake, we find that the optimal direction to move all the pixels is given by this image in the middle. To a human it looks a lot like noise, but it's not actually noise; it's carefully computed as a function of the parameters of the network, and there's actually a lot of structure there. If we multiply that image of the structured attack by a very small coefficient and add it to the original panda, we get an image that a human can't tell from the original panda. In fact, on this slide there is no visible difference between the panda on the left and the panda on the right. When we present the image to the convolutional network we use 32-bit floating point values; the monitor here can only display eight bits of color resolution, and we have made a change that's just barely too small to affect the smallest of those eight bits, but it affects the other 24 bits of the 32-bit floating point representation, and that little tiny change is enough to fool the convolutional network into recognizing this image of a panda as being a gibbon. Another interesting thing is that it doesn't just change the class; it's not that we just barely found the decision boundary and just barely stepped across it. The convolutional network actually has much more confidence in its incorrect prediction, that the image on the right is a gibbon, than it had for the original being a panda. On the right, it believes that the image is a gibbon with 99.9% probability, so before it thought there was about a 1/3 chance that it was something other than a panda, and now it's about as certain as it can possibly be that it's a gibbon. As a little bit of history, people have studied ways of computing attacks to fool different machine learning models since at least about 2004, and maybe earlier. For a long time this was done in the context of fooling spam detectors. In about 2013, Battista Biggio found that you could fool neural networks in this way, and around the same time my colleague, Christian Szegedy, found that you could make this kind of attack against deep neural networks just by using an optimization algorithm to search over the input image. A lot of what I'll be telling you about today is my own follow-up work on this topic; I've spent a lot of my career over the past few years understanding why these attacks are possible and why it's so easy to fool these convolutional networks. When my colleague Christian first discovered this phenomenon, independently from Battista Biggio but around the same time, he found it as a result of a visualization he was trying to make. He wasn't studying security; he wasn't studying how to fool a neural network. Instead, he had a convolutional network that could recognize objects very well, and he wanted to understand how it worked, so he thought that maybe he could take an image of a scene, for example a picture of a ship, and gradually transform that image into something that the network would recognize as being an airplane. Over the course of that transformation, he could see how the features of the input change.
You might expect that maybe the background would turn blue to look like the sky behind an airplane, or you might expect that the ship would grow wings to look more like an airplane. You could conclude from that that the convolutional network uses the blue sky or uses the wings to recognize airplanes. That's actually not really what happened at all. Each of these panels here shows an animation that you read left to right, top to bottom. Each panel is another step of gradient ascent on the log probability that the input is an airplane according to a convolutional net model, and we follow that gradient with respect to the input image. You're probably used to following the gradient on the parameters of a model. You can use the back propagation algorithm to compute the gradient on the input image using exactly the same procedure that you would use to compute the gradient on the parameters. In this animation of the ship in the upper left, we see five panels that all look basically the same. Gradient ascent doesn't seem to have moved the image at all, but by the last panel the network is completely confident that this is an airplane. When you first code up this kind of experiment, especially if you don't know what's going to happen, it feels a little bit like you have a bug in your script and you're just displaying the same image over and over again. The first time I did it, I couldn't believe it was happening, and I had to open up the images in NumPy, and take the difference of them, and make sure that there was actually a non-zero difference in there, but there is. I show several different animations here of a ship, a car, a cat, and a truck. The only one where I actually see any change at all is the image of the cat. The color of the cat's face changes a little bit, and maybe it becomes a little bit more like the color of a metal airplane. Other than that, I don't see any changes in any of these animations, and I don't see anything very suggestive of an airplane. So gradient ascent, rather than turning the input into an example of an airplane, has found an image that fools the network into thinking that the input is an airplane. And if we were malicious attackers we didn't even have to work very hard to figure out how to fool the network. We just asked the network to give us an image of an airplane, and it gave us something that fools it into thinking that the input is an airplane. When Christian first published this work, a lot of articles came out with titles like, The Flaw Lurking in Every Deep Neural Network, or Deep Learning has Deep Flaws. It's important to remember that these vulnerabilities apply to essentially every machine learning algorithm that we've studied so far. Some of them like RBF networks and Parzen density estimators are able to resist this effect somewhat, but even very simple machine learning algorithms are highly vulnerable to adversarial examples. In this image, I show an animation of what happens when we attack a linear model, so it's not a deep algorithm at all. It's just a shallow softmax model. You multiply by a matrix, you add a vector of bias terms, you apply the softmax function, and you've got your probability distribution over the 10 MNIST classes. At the upper left, I start with an image of a nine, and then as we move left to right, top to bottom, I gradually transform it to be a zero. Where I've drawn the yellow box, the model assigns high probability to it being a zero.
I forget exactly what my threshold was for high probability, but I think it was around 0.9 or so. Then as we move to the second row, I transform it into a one, and the second yellow box indicates where we've successfully fooled the model into thinking it's a one with high probability. And then as you read the rest of the yellow boxes left to right, top to bottom, we go through the twos, threes, fours, and so on, until finally at the lower right we have a nine that has a yellow box around it, and it actually looks like a nine, but in this case the only reason it actually looks like a nine is that we started the whole process with a nine. We successfully swept through all 10 classes of MNIST without substantially changing the image of the digit in any way that would interfere with human recognition. This linear model was actually extremely easy to fool. Besides this softmax model, we've also seen that we can fool many different kinds of linear models, including logistic regression and SVMs. We've also found that we can fool decision trees, and to a lesser extent, nearest neighbors classifiers. We wanted to explain exactly why this happens. Back in about 2014, after we'd published the original paper where we'd said that these problems exist, we were trying to figure out why they happen. When we wrote our first paper, we thought that basically this is a form of overfitting, that you have a very complicated deep neural network, it learns to fit the training set, its behavior on the test set is somewhat undefined, and then it makes random mistakes that an attacker can exploit. Let's walk through what that story looks like somewhat concretely. I have here a training set of three blue X's and three green O's. We want to make a classifier that can recognize X's and recognize O's. We have a very complicated classifier that can easily fit the training set, so we represent everywhere it believes X's should be with blobs of blue color, and I've drawn a blob of blue around all of the training set X's, so it correctly classifies the training set. It also has a blob of green mass showing where the O's are, and it successfully fits all of the green training set O's, but then because this is a very complicated function and it has just way more parameters than it actually needs to represent the training task, it throws little blobs of probability mass around the rest of space randomly. On the left there's a blob of green space that's kind of near the training set X's, and I've drawn a red X there to show that maybe this would be an adversarial example where we expect the classification to be X, but the model assigns O. On the right, I've shown that there's a red O where we have another adversarial example. We're very near the other O's. We might expect the model to classify it as an O, and yet because it's drawn blue mass there it's actually assigning it to be an X. If overfitting is really the story then each adversarial example is more or less the result of bad luck and also more or less unique. If we fit the model again or we fit a slightly different model we would expect it to make different random mistakes on points that are off the training set, but that was actually not what we found at all. We found that many different models would misclassify the same adversarial examples, and they would assign the same class to them.
We also found that if we took the difference between an original example and an adversarial example then we had a direction in input space and we could add that same offset vector to any clean example, and we would almost always get an adversarial example as a result. So we started to realize that there was systematic effect going on here, not just a random effect. That led us to another idea which is that adversarial examples might actually be more like underfitting rather than overfitting. They might actually come from the model being too linear. Here I draw the same task again where we have the same manifold of O's and the same line of X's, and this time I fit a linear model to the data set rather than fitting a high capacity, non-linear model to it. We see that we get a dividing hyperplane running in between the two classes. This hyperplane doesn't really capture the true structure of the classes. The O's are clearly arranged in a C-shaped manifold. If we keep walking past the end of the O's, we've crossed the decision boundary and we've drawn a red O where even though we're very near the decision boundary and near other O's we believe that it is now an X. Similarly we can take steps that go from near X's to just over the line that are classified as O's. Another thing that's somewhat unusual about this plot is that if we look at the lower left or upper right corners these corners are very confidently classified as being X's on the lower left or O's on the upper right even though we've never seen any data over there at all. The linear model family forces the model to have very high confidence in these regions that are very far from the decision boundary. We've seen that linear models can actually assign really unusual confidence as you move very far from the decision boundary, even if there isn't any data there, but are deep neural networks actually anything like linear models? Could linear models actually explain anything about how it is that deep neural nets fail? It turns out that modern deep neural nets are actually very piecewise linear, so rather than being a single linear function they are piecewise linear with maybe not that many linear pieces. If we use rectified linear units then the mapping from the input image to the output logits is literally a piecewise linear function. By the logits I mean the un-normalized log probabilities before we apply the softmax op at the output of the model. There are other neural networks like maxout networks that are also literally piecewise linear. And then there are several that become very close to it. Before rectified linear units became popular most people used to use sigmoid units of one form or another either logistic sigmoid or hyperbolic tangent units. These sigmoidal units have to be carefully tuned, especially at initialization so that you spend most of your time near the center of the sigmoid where the sigmoid is approximately linear. Then finally, the LSTM, a kind of recurrent network that is one of the most popular recurrent networks today, uses addition from one time step to the next in order to accumulate and remember information over time. Addition is a particularly simple form of linearity, so we can see that the interaction from a very distant time step in the past and the present is highly linear within an LSTM. Now to be clear, I'm speaking about the mapping from the input of the model to the output of the model. That's what I'm saying is close to being linear or is piecewise linear with relatively few pieces. 
The mapping from the parameters of the network to the output of the network is non-linear because the weight matrices at each layer of the network are multiplied together. So we actually get extremely non-linear interactions between the parameters and the output. That's what makes training a neural network so difficult. But the mapping from the input to the output is much more linear and predictable, and it means that optimization problems that aim to optimize the input to the model are much easier than optimization problems that aim to optimize the parameters. If we go and look for this happening in practice we can take a convolutional network and trace out a one-dimensional path through its input space. So what we're doing here is we're choosing a clean example. It's an image of a white car on a red background, and we are choosing a direction to travel through space. We are going to have a coefficient epsilon that we multiply by this direction. When epsilon is negative 30, like at the left end of the plot, we're subtracting off a lot of this unit vector direction. When epsilon is zero, like in the middle of the plot, we're visiting the original image from the data set, and when epsilon is positive 30, like at the right end of the plot, we're adding this direction onto the input. In the panel on the left, I show you an animation where we move from epsilon equals negative 30 up to epsilon equals positive 30. You read the animation left to right, top to bottom, and everywhere that there's a yellow box the input is correctly recognized as being a car. On the upper left, you see that it looks mostly blue. On the lower right, it's hard to tell what's going on. It's kind of reddish and so on. In the middle row, just after where the yellow boxes end you can see pretty clearly that it's a car on a red background, though the image is small on these slides. What's interesting to look at here is the logits that the model outputs. This is a deep convolutional rectified linear unit network. Because it uses rectified linear units, we know that the output is a piecewise linear function of the input to the model. The main question we're asking by making this plot is how many different pieces does this piecewise linear function have if we look at one particular cross section. You might think that maybe a deep net is going to represent some extremely wiggly complicated function with lots and lots of linear pieces no matter which cross section you look in. Or we might find that it has more or less two pieces for each function we look at. Each of the different curves on this plot is the logits for a different class. We see that out at the tails of the plot the frog class is the most likely, and the frog class basically looks like a big v-shaped function. The logits for the frog class become very high when epsilon is negative 30 or positive 30, and they drop down and become a little bit negative when epsilon is zero. The car class, listed as automobile here, is actually high in the middle, and the car is correctly recognized. As we sweep out to very negative epsilon, the logits for the car class do increase, but they don't increase nearly as quickly as the logits for the frog class.
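The cross section just described can be traced with only a few lines of code. Here is a minimal sketch, assuming a classifier `model` that returns logits, a clean image `x`, and a unit-norm perturbation `direction` (all hypothetical names); plotting the returned logits against epsilon reproduces the kind of curve discussed above.

```python
import torch

def logits_along_direction(model, x, direction, eps_min=-30.0, eps_max=30.0, n=121):
    """Trace the logits along a 1-D cross section of input space.

    Returns the swept epsilons and a tensor of shape (n, num_classes) holding
    the logits at x + epsilon * direction for each epsilon.
    """
    epsilons = torch.linspace(eps_min, eps_max, n)
    all_logits = []
    with torch.no_grad():
        for eps in epsilons:
            logits = model((x + eps * direction).unsqueeze(0))
            all_logits.append(logits.squeeze(0))
    return epsilons, torch.stack(all_logits)
```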
So, we've found a direction that's associated with the frog class and as we follow it out to a relatively large perturbation, we find that the model extrapolates linearly and begins to make a very unreasonable prediction that the frog class is extremely likely just because we've moved for a long time in this direction that was locally associated with the frog class being more likely. When we actually go and construct adversarial examples, we need to remember that we're able to get quite a large perturbation without changing the image very much as far as a human being is concerned. So here I show you a handwritten digit three, and I'm going to change it in several different ways, and all of these changes have the same L2 norm perturbation. In the top row, I'm going to change the three into a seven just by looking for the nearest seven in the training set. The difference between those two is this image that looks a little bit like the seven wrapped in some black lines. So here white pixels in the middle image in the perturbation column, the white pixels represent adding something and black pixels represent subtracting something as you move from the left column to the right column. So when we take the three and we apply this perturbation that transforms it into a seven, we can measure the L2 norm of that perturbation. And it turns out to have an L2 norm of 3.96. That gives you kind of a reference for how big these perturbations can be. In the middle row, we apply a perturbation of exactly the same size, but with the direction chosen randomly. In this case we don't actually change the class of the three at all, we just get some random noise that didn't really change the class. A human could still easily read it as being a three. And then finally at the very bottom row, we take the three and we just erase a piece of it with a perturbation of the same norm and we turn it into something that doesn't have any class at all. It's not a three, it's not a seven, it's just a defective input. All of these changes can happen with the same L2 norm perturbation. And actually a lot of the time with adversarial examples, you make perturbations that have an even larger L2 norm. What's going on is that there are several different pixels in the image, and so small changes to individual pixels can add up to relatively large vectors. For larger datasets like ImageNet, where there's even more pixels, you can make very small changes to each pixel that travel very far in vector space as measured by the L2 norm. That means that you can actually make changes that are almost imperceptible but actually move you really far and get a large dot product with the coefficients of the linear function that the model represents. It also means that when we're constructing adversarial examples, we need to make sure that the adversarial example procedure isn't able to do what happened in the top row of this slide here. So in the top row of this slide, we took the three and we actually just changed it into a seven. So when the model says that the image in the upper right is a seven, it's not a mistake. We actually just changed the input class. When we build adversarial examples, we want to make sure that we're measuring real mistakes. If we're experimenters studying how easy a network is to fool, we want to make sure that we're actually fooling it and not just changing the input class. And if we're an attacker, we actually want to make sure that we're causing misbehavior in the system. 
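To make the norm bookkeeping concrete, here is a tiny illustrative calculation (the image shapes and epsilons are made-up examples, not the numbers from the slide): a per-pixel change bounded by a small epsilon still carries an L2 norm that grows with the square root of the number of pixels, which is why a perceptually tiny perturbation can travel a long way in vector space.

```python
import numpy as np

# L2 norm of a constant-magnitude perturbation of size eps at every pixel.
for shape, eps in [((28, 28), 0.25), ((224, 224, 3), 2.0 / 255)]:
    d = np.prod(shape)
    print(shape, "worst-case L2 norm:", eps * np.sqrt(d))
# (28, 28), eps = 0.25      -> about 7.0
# (224, 224, 3), eps = 2/255 -> about 3.0
```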
To do that, when we build adversarial examples, we use the maxnorm to constrain the perturbation. Basically this says that no pixel can change by more than some amount epsilon. So the L2 norm can get really big, but you can't concentrate all the changes for that L2 norm to erase pieces of the digit, like in the bottom row here we erased the top of a three. One very fast way to build an adversarial example is just to take the gradient of the cost that you used to train the network with respect to the input, and then take the sign of that gradient. The sign is essentially enforcing the maxnorm constraint. You're only allowed to change the input by up to epsilon at each pixel, so if you just take the sign it tells you whether you want to add epsilon or subtract epsilon in order to hurt the network. You can view this as taking the observation that the network is more or less linear, as we showed on this slide, and using that to motivate building a first order Taylor series approximation of the neural network's cost. And then subject to that Taylor series approximation, we want to maximize the cost following this maxnorm constraint. And that gives us this technique that we call the fast gradient sign method. If you want to just get your hands dirty and start making adversarial examples really quickly, or if you have an algorithm where you want to train on adversarial examples in the inner loop of learning, this method will make adversarial examples for you very, very quickly. In practice you should also use other methods, like Nicholas Carlini's attack based on multiple steps of the Adam optimizer, to make sure that you have a very strong attack that you bring out when you think you have a model that might be more powerful. A lot of the time people find that they can defeat the fast gradient sign method and think that they've built a successful defense, but then when you bring out a more powerful method that takes longer to evaluate, they find that they can't overcome the more computationally expensive attack. I've told you that adversarial examples happen because the model is very linear. And then I told you that we could use this linearity assumption to build this attack, the fast gradient sign method. This method, when applied to a regular neural network that doesn't have any special defenses, will get over a 99% attack success rate. So that seems to confirm, somewhat, this hypothesis that adversarial examples come from the model being far too linear and extrapolating in linear fashions when it shouldn't. Well we can actually go looking for some more evidence. My friend David Warde-Farley and I built these maps of the decision boundaries of neural networks. And we found that they are consistent with the linearity hypothesis. So the FGSM is that attack method that I described in the previous slide, where we take the sign of the gradient. We'd like to build a map of a two-dimensional cross section of input space and show which classes are assigned to the data at each point. In the grid on the right, each different cell, each little square within the grid, is a map of a CIFAR-10 classifier's decision boundary, with each cell corresponding to a different CIFAR-10 testing sample. On the left I show you a little legend where you can understand what each cell means. The very center of each cell corresponds to the original example from the CIFAR-10 dataset with no modification. As we move left to right in the cell, we're moving in the direction of the fast gradient sign method attack. 
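Pausing the map description for a moment, the one-step attack described above can be sketched very compactly. This is a minimal illustration, assuming a differentiable classifier `model`, a standard training loss such as cross-entropy, and a clean labeled batch `(x, y)`; the names are placeholders, not a specific library API.

```python
import torch

def fgsm(model, loss_fn, x, y, epsilon):
    """Fast gradient sign method: a single max-norm-constrained step.

    x_adv = x + epsilon * sign( d loss(model(x), y) / dx )
    Clamping keeps pixel values in a valid [0, 1] range.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()                          # gradient of the cost w.r.t. the input
    x_adv = x + epsilon * x.grad.sign()      # each pixel moves by exactly +/- epsilon
    return x_adv.clamp(0.0, 1.0).detach()
```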
So just the sign of the gradient. As we move up and down within the cell, we're moving in a random direction that's orthogonal to the fast gradient sign method direction. So we get to see a cross section, a 2D cross section of CIFAR-10 decision space. At each pixel within this map, we plot a color that tells us which class is assigned there. We use white pixels to indicate that the correct class was chosen, and then we used different colors to represent all of the other incorrect classes. You can see that in nearly all of the grid cells on the right, roughly the left half of the image is white. So roughly the left half of the image has been correctly classified. As we move to the right, we see that there is usually a different color on the right half. And the boundaries between these regions are approximately linear. What's going on here is that the fast gradient sign method has identified a direction where if we get a large dot product with that direction we can get an adversarial example. And from this we can see that adversarial examples live more or less in linear subspaces. When we first discovered adversarial examples, we thought that they might live in little tiny pockets. In the first paper we actually speculated that maybe they're a little bit like the rational numbers, hiding out finely tiled among the real numbers, with nearly every real number being near a rational number. We thought that because we were able to find an adversarial example corresponding to every clean example that we loaded into the network. After doing this further analysis, we found that what's happening is that every real example is near one of these linear decision boundaries where you cross over into an adversarial subspace. And once you're in that adversarial subspace, all the other points nearby are also adversarial examples that will be misclassified. This has security implications because it means you only need to get the direction right. You don't need to find an exact coordinate in space. You just need to find a direction that has a large dot product with the sign of the gradient. And once you move more or less approximately in that direction, you can fool the model. We also made another cross section where after using the left-right axis as the fast gradient sign method, we looked for a second direction that has high dot product with the gradient so we could make both axes adversarial. And in this case you see that we get linear decision boundaries. They're now oriented diagonally rather than vertically, but you can see that there's actually this two-dimensional subspace of adversarial examples that we can cross into. Finally it's important to remember that adversarial examples are not noise. You can add a lot of noise to an adversarial example and it will stay adversarial. You can add a lot of noise to a clean example and it will stay clean. Here we make random cross sections where both axes are randomly chosen directions. And you see that on CIFAR-10, most of the cells are completely white, meaning that they're correctly classified to start with, and when you add noise they stay correctly classified. We also see that the model makes some mistakes because this is the test set. And generally if a test example starts out misclassified, adding the noise doesn't change it. There are a few exceptions where, if you look in the third row, third column, noise actually can make the model misclassify the example for especially large noise values. 
And there's even some where, in the top row there's one example you can see where the model is misclassifying the test example to start with but then noise can change it to be correctly classified. For the most part, noise has very little effect on the classification decision compared to adversarial examples. What's going on here is that in high dimensional spaces, if you choose some reference vector and then you choose a random vector in that high dimensional space, the random vector will, on average, have zero dot product with the reference vector. So if you think about making a first order Taylor series approximation of your cost, and thinking about how your Taylor series approximation predicts that random vectors will change your cost. You see that random vectors on average have no effect on the cost. But adversarial examples are chosen to maximize it. In these plots we looked in two dimensions. More recently, Florian Tramer here at Stanford got interested in finding out just how many dimensions there are to these subspaces where the adversarial examples lie in a thick contiguous region. And we came up with an algorithm together where you actually look for several different orthogonal vectors that all have a large dot product with the gradient. By looking in several different orthogonal directions simultaneously, we can map out this kind of polytope where many different adversarial examples live. We found out that this adversarial region has on average about 25 dimensions. If you look at different examples you'll find different numbers of adversarial dimensions. But on average on MNIST we found it was about 25. So what's interesting here is the dimensionality actually tells you something about how likely you are to find an adversarial example by generating random noise. If every direction were adversarial, then any change would cause a misclassification. If most of the directions were adversarial, then random directions would end up being adversarial just by accident most of the time. And then if there was only one adversarial direction, you'd almost never find that direction just by adding random noise. When there's 25 you have a chance of doing it sometimes. Another interesting thing is that different models will often misclassify the same adversarial examples. The subspace dimensionality of the adversarial subspace relates to that transfer property. The larger the dimensionality of the subspace, the more likely it is that the subspaces for two models will intersect. So if you have two different models that have a very large adversarial subspace, you know that you can probably transfer adversarial examples from one to the other. But if the adversarial subspace is very small, then unless there's some kind of really systematic effect forcing them to share exactly the same subspace, it seems less likely that you'll be able to transfer examples just due to the subspaces randomly aligning. A lot of the time in the adversarial example research community, we refer back to the story of Clever Hans. This comes from an essay by Bob Sturm called Clever Hans, Clever Algorithms. Because Clever Hans is a pretty good metaphor for what's happening with machine learning algorithms. So Clever Hans was a horse that lived in the early 1900s. His owner trained him to do arithmetic problems. So you could ask him, "Clever Hans, "what's two plus one?" And he would answer by tapping his hoof. 
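Before moving on, here is a rough sketch of how one cell of the cross-section maps above could be generated, assuming a classifier `model`, a clean labeled example `(x, y)`, and a precomputed adversarial direction `adv_dir` such as the normalized sign of the gradient; all of these names are assumptions for illustration, and real experiments would batch the grid evaluation rather than loop.

```python
import torch

def decision_map(model, x, y, adv_dir, eps_max=4.0, n=41):
    """Sweep the adversarial direction horizontally and a random orthogonal
    direction vertically, recording the predicted class at each grid point.
    Cells where the prediction equals y can be colored white when plotting."""
    rand_dir = torch.randn_like(x)
    # Gram-Schmidt: remove the component along adv_dir, then normalize.
    rand_dir -= (rand_dir * adv_dir).sum() / (adv_dir * adv_dir).sum() * adv_dir
    rand_dir /= rand_dir.norm()

    eps = torch.linspace(-eps_max, eps_max, n)
    preds = torch.zeros(n, n, dtype=torch.long)
    with torch.no_grad():
        for i, e_rand in enumerate(eps):        # vertical axis: random direction
            for j, e_adv in enumerate(eps):     # horizontal axis: adversarial direction
                x_probe = x + e_adv * adv_dir + e_rand * rand_dir
                preds[i, j] = model(x_probe.unsqueeze(0)).argmax(dim=1).item()
    return preds
```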
And after the third tap, everybody would start cheering and clapping and looking excited because he'd actually done an arithmetic problem. Well it turned out that he hadn't actually learned to do arithmetic. But it was actually pretty hard to figure out what was going on. His owner was not trying to defraud anybody, his owner actually believed he could do arithmetic. And presumably Clever Hans himself was not trying to trick anybody. But eventually a psychologist examined him and found that if he was put in a room alone without an audience, and the person asking the questions wore a mask, he couldn't figure out when to stop tapping. You'd ask him, "Clever Hans, "what's one plus one?" And he'd just [knocking] keep staring at your face, waiting for you to give him some sign that he was done tapping. So everybody in this situation was trying to do the right thing. Clever Hans was trying to do whatever it took to get the apple that his owner would give him when he answered an arithmetic problem. His owner did his best to train him correctly with real arithmetic questions and real rewards for correct answers. And what happened was that Clever Hans inadvertently focused on the wrong cue. He found this cue of people's social reactions that could reliably help him solve the problem, but then it didn't generalize to a test set where you intentionally took that cue away. It did generalize to a naturally occurring test set, where he had an audience. So that's more or less what's happening with machine learning algorithms. They've found these very linear patterns that can fit the training data, and these linear patterns even generalize to the test data. They've learned to handle any example that comes from the same distribution as their training data. But then if you shift the distribution that you test them on, if a malicious adversary actually creates examples that are intended to fool them, they're very easily fooled. In fact we find that modern machine learning algorithms are wrong almost everywhere. We tend to think of them as being correct most of the time, because when we run them on naturally occurring inputs they achieve very high accuracy percentages. But if we look instead of as the percentage of samples from an IID test set, if we look at the percentage of the space in RN that is correctly classified, we find that they misclassify almost everything and they behave reasonably only on a very thin manifold surrounding the data that we train them on. In this plot, I show you several different examples of Gaussian noise that I've run through a CIFAR-10 classifier. Everywhere that there is a pink box, the classifier thinks that there is something rather than nothing. I'll come back to what that means in a second. Everywhere that there is a yellow box, one step of the fast gradient sign method was able to persuade the model that it was looking specifically at an airplane. I chose the airplane class because it was the one with the lowest success rate. It had about a 25% success rate. That means an attacker would need four chances to get noise recognized as an airplane on this model. An interesting thing, and appropriate enough given the story of Clever Hans, is that this model found that about 70% of RN was classified as a horse. So I mentioned that this model will say that noise is something rather than nothing. And it's actually kind of important to think about how we evaluate that. 
If you have a softmax classifier, it has to give you a distribution over the n different classes that you train it on. So there's a few ways that you can argue that the model is telling you that there's something rather than nothing. One is you can say, if it assigns something like 90% to one particular class, that seems to be voting for that class being there. We'd much rather see it give us something like a uniform distribution saying this noise doesn't look like anything in the training set so it's equally likely to be a horse or a car. And that's not what the model does. It'll say, this is very definitely a horse. Another thing that you can do is you can replace the last layer of the model. For example, you can use a sigmoid output for each class. And then the model is actually capable of telling you that any subset of classes is present. It could actually tell you that an image is both a horse and a car. And what we would like it to do for the noise is tell us that none of the classes is present, that all of the sigmoids should have a value of less than 1/2. And 1/2 isn't even particularly a low threshold. We could reasonably expect that all of the sigmoids would be less than 0.01 for such a defective input as this. But what we find instead is that the sigmoids tend to have at least one class present just when we run Gaussian noise of sufficient norm through the model. We've also found that we can do adversarial examples for reinforcement learning. And there's a video for this. I'll upload the slides after the talk and you can follow the link. Unfortunately I wasn't able to get the WiFi to work so I can't show you the video animated. But I can describe basically what's going on from this still here. There's a game Seaquest on Atari where you can train reinforcement learning agents to play that game. And you can take the raw input pixels and you can take the fast gradient sign method or other attacks that use other norms besides the max norm, and compute perturbations that are intended to change the action that the policy would select. So the reinforcement learning policy, you can think of it as just being like a classifier that looks at a frame. And instead of categorizing the input into a particular category, it gives you a softmax distribution over actions to take. So if we just take that and say that the most likely action should have its accuracy be decreased by the adversary. Sorry, to have its probability be decreased by the adversary, you'll get these perturbations of input frames that you can then apply and cause the agent to play different actions than it would have otherwise. And using this you can make the agent play Seaquest very, very badly. It's maybe not the most interesting possible thing. What we'd really like is an environment where there are many different reward functions available for us to study. So for example, if you had a robot that was intended to cook scrambled eggs, and you had a reward function measuring how well it's cooking scrambled eggs, and you had another reward function measuring how well it's cooking chocolate cake, it would be really interesting if we could make adversarial examples that cause the robot to make a chocolate cake when the user intended for it to make scrambled eggs. That's because it's very difficult to succeed at something and it's relatively straightforward to make a system fail. So right now, adversarial examples for RL are very good at showing that we can make RL agents fail. 
But we haven't yet been able to hijack them and make them do a complicated task that's different from what their owner intended. Seems like it's one of the next steps in adversarial example research though. If we look at high-dimension linear models, we can actually see that a lot of this is very simple and straightforward. Here we have a logistic regression model that classifies sevens and threes. So the whole model can be described just by a weight vector and a single scalar bias term. We don't really need to see the bias term for this exercise. If you look on the left I've plotted the weights that we used to discriminate sevens and threes. The weights should look a little bit like the difference between the average seven and the average three. And then down at the bottom we've taken the sign of the weights. So the gradient for a logistic regression model is going to be proportional to the weights. And then the sign of the weights gives you essentially the sign of the gradient. So we can do the fast gradient sign method to attack this model just by looking at its weights. In the examples in the panel that's the second column from the left we can see clean examples. And then on the right we've just added or subtracted this image of the sign of the weights off of them. To you and me as human observers, the sign of the weights is just like garbage that's in the background, and we more or less filter it out. It doesn't look particularly interesting to us. It doesn't grab our attention. To the logistic regression model this image of the sign of the weights is the most salient thing that could ever appear in the image. When it's positive it looks like the world's most quintessential seven. When it's negative it looks like the world's most quintessential three. And so the model makes its decision almost entirely based on this perturbation we added to the image, rather than on the background. You could also take this same procedure, and my colleague Andrej at OpenAI showed how you can modify the image on ImageNet using this same approach, and turn this goldfish into a daisy. Because ImageNet is much higher dimensional, you don't need to use quite as large of a coefficient on the image of the weights. So we can make a more persuasive fooling attack. You can see that this same image of the weights, when applied to any different input image, will actually reliably cause a misclassification. What's going on is that there are many different classes, and it means that if you choose the weights for any particular class, it's very unlikely that a new test image will belong to that class. So on ImageNet, if we're using the weights for the daisy class, and there are 1,000 different classes, then we have about a 99.9% chance that a test image will not be a daisy. If we then go ahead and add the weights for the daisy class to that image, then we get a daisy, and because that's not the correct class, it's a misclassification. So there's a paper at CVPR this year called Universal Adversarial Perturbations that expands a lot more on this observation that we had going back in 2014. But basically these weight vectors, when applied to many different images, can cause misclassification in all of them. I've spent a lot of time telling you that these linear models are just terrible, and at some point you've probably been hoping I would give you some sort of a control experiment to convince you that there's another model that's not terrible. So it turns out that some quadratic models actually perform really well. 
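Before turning to that control experiment, here is a tiny sketch of the sign-of-the-weights attack on the binary logistic regression model just described. The trained weights `w` and bias `b` are assumed given; since the gradient of the loss with respect to the input is proportional to `w` for this model, the sign of the gradient is just the sign of the weights, and the same perturbation image attacks every input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attack_logreg(x, y, w, b, epsilon=0.25):
    """x: flattened image in [0, 1]; y: 1 for 'seven', 0 for 'three'.

    To hurt the model, push the logit up when the true label is 0 and
    down when the true label is 1, by exactly +/- epsilon per pixel.
    """
    direction = np.sign(w) if y == 0 else -np.sign(w)
    x_adv = np.clip(x + epsilon * direction, 0.0, 1.0)
    return x_adv, sigmoid(w @ x + b), sigmoid(w @ x_adv + b)
```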
In particular a shallow RBF network is able to resist adversarial perturbations very well. Earlier I showed you an animation where I took a nine and I turned it into a zero, one, two, and so on, without really changing its appearance at all. And I was able to fool a linear softmax regression classifier. Here I've got an RBF network where it outputs a separate probability of each class being absent or present, and that probability is given by e to the negative square of the difference between a template image and the input image. And if we actually follow the gradient of this classifier, it does actually turn the image into a zero, a one, a two, a three, and so on, and we can actually recognize those changes. The problem is, this classifier does not get very good accuracy on the training set. It's a shallow model. It's basically just a template matcher. It is literally a template matcher. And if you try to make it more sophisticated by making it deeper, it turns out that the gradient of these RBF units is zero, or very near zero, throughout most of RN. So they're extremely difficult to train, even with batch normalization and methods like that. I haven't managed to train a deep RBF network yet. But I think if somebody comes up with better hyperparameters or a new, more powerful optimization algorithm, it might be possible to solve the adversarial example problem by training a deep RBF network where the model is so nonlinear and has such wide flat areas that the adversary is not able to push the cost uphill just by making small changes to the model's input. One of the things that's the most alarming about adversarial examples is that they generalize from one dataset to another and one model to another. Here I've trained two different models on two different training sets. The training sets are tiny in both cases. It's just MNIST three versus seven classification, and this is really just for the purpose of making a slide. If you train a logistic regression model on the digits shown in the left panel, you get the weights shown on the left in the lower panel. If you train a logistic regression model on the digits shown in the upper right, you get the weights shown on the right in the lower panel. So you've got two different training sets and we learn weight vectors that look very similar to each other. That's just because machine learning algorithms generalize. You want them to learn a function that's somewhat independent of the data that you train them on. It shouldn't matter which particular training examples you choose. If you want to generalize from the training set to the test set, you've also got to expect that different training sets will give you more or less the same result. And that means that because they've learned more or less similar functions, they're vulnerable to similar adversarial examples. An adversary can compute an image that fools one and use it to fool the other. In fact we can actually go ahead and measure the transfer rate between several different machine learning techniques, not just different data sets. Nicolas Papernot and his collaborators have spent a lot of time exploring this transferability effect. And they found that for example, logistic regression makes adversarial examples that transfer to decision trees with 87.4% probability. Wherever you see dark squares in this matrix, that shows that there's a high amount of transfer. 
That means that it's very possible for an attacker using the model on the left to create adversarial examples for the model on the right. The procedure overall is that, suppose the attacker wants to fool a model that they don't actually have access to. They don't know the architecture that's used to train the model. They may not even know which algorithm is being used. They may not know whether they're attacking a decision tree or a deep neural net. And they also don't know the parameters of the model that they're going to attack. So what they can do is train their own model that they'll use to build the attack. There's two different ways you can train your own model. One is you can label your own training set for the same task that you want to attack. Say that somebody is using an ImageNet classifier, and for whatever reason you don't have access to ImageNet, you can take your own photos and label them, train your own object recognizer. It's going to share adversarial examples with an ImageNet model. The other thing you can do is, say that you can't afford to gather your own training set. What you can do instead is if you can get limited access to the model where you just have the ability to send inputs to the model and observe its outputs, then you can send those inputs, observe the outputs, and use those as your training set. This'll work even if the output that you get from the target model is only the class label that it chooses. A lot of people read this and assume that you need to have access to all the probability values it outputs. But even just the class labels are sufficient. So once you've used one of these two methods, either gather your own training set or observing the outputs of a target model, you can train your own model and then make adversarial examples for your model. Those adversarial examples are very likely to transfer and affect the target model. So you can then go and send those out and fool it, even if you didn't have access to it directly. We've also measured the transferability across different data sets, and for most models we find that they're kind of in an intermediate zone where different data sets will result in a transfer rate of, like, 60% to 80%. There's a few models like SVMs that are very data dependent because SVMs end up focusing on a very small subset of the training data to form their final decision boundary. But most models that we care about are somewhere in the intermediate zone. Now that's just assuming that you rely on the transfer happening naturally. You make an adversarial example and you hope that it will transfer to your target. What if you do something to stack the deck in your favor and improve the odds that you'll get your adversarial examples to transfer? Dawn Song's group at UC Berkeley studied this. They found that if they take an ensemble of different models and they use gradient descent to search for an adversarial example that will fool every member of their ensemble, then it's extremely likely that it will transfer and fool a new machine learning model. So if you have an ensemble of five models, you can get it to the point where there's essentially a 100% chance that you'll fool a sixth model out of the set of models that they compared. They looked at things like ResNets of different depths, VGG, and GoogLeNet. So in the labels for each of the different rows you can see that they made ensembles that lacked each of these different models, and then they would test it on the different target models. 
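The core of that ensemble idea can be sketched very compactly before getting to the specific numbers. This is a deliberately simplified one-step variant on the summed loss of several local models; the published attack uses an iterative optimizer over a fused ensemble prediction, so treat this only as an illustration of the objective, not the exact method.

```python
import torch
import torch.nn.functional as F

def ensemble_fgsm(models, x, y, epsilon):
    """Find one perturbation that raises the loss of every model in `models`
    at once, in the hope that it also transfers to an unseen target model.
    `models` is a list of classifiers returning logits; (x, y) is a clean batch."""
    x = x.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(m(x), y) for m in models)   # joint objective
    loss.backward()
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```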
So like if you make an ensemble that omits GoogLeNet, you have only about a 5% chance of GoogLeNet correctly classifying the adversarial example you make for that ensemble. If you make an ensemble that omits ResNet-152, in their experiments they found that there was a 0% chance of ResNet-152 resisting that attack. That probably indicates they should have run some more adversarial examples until they found a non-zero success rate, but it does show that the attack is very powerful. And when you go and intentionally cause the transfer effect, you can really make it quite strong. A lot of people often ask me if the human brain is vulnerable to adversarial examples. And for this lecture I can't use copyrighted material, but there are some really hilarious things on the Internet. If you go looking for, like, the fake CAPTCHA with images of Mark Hamill, you'll find something that my perception system definitely can't handle. So here's another one that's actually published with a license where I was confident I'm allowed to use it. You can look at this image of different circles here, and they appear to be intertwined spirals. But in fact they are concentric circles. The orientation of the edges of the squares is interfering with the edge detectors in your brain, making it look like the circles are spiraling. So you can think of these optical illusions as being adversarial examples in the human brain. What's interesting is that we don't seem to share many adversarial examples in common with machine learning models. Adversarial examples transfer extremely reliably between different machine learning models, especially if you use that ensemble trick that was developed at UC Berkeley. But those adversarial examples don't fool us. It tells us that we must be using a very different algorithm or model family than current convolutional networks. We don't really know what the difference is yet, but it would be very interesting to figure that out. It seems to suggest that studying adversarial examples could tell us how to significantly improve our existing machine learning models. Even if you don't care about having an adversary, we might figure out something or other about how to make machine learning algorithms deal with ambiguity and unexpected inputs more like a human does. If we actually want to go out and do attacks in practice, there's starting to be a body of research on this subject. Nicolas Papernot showed that he could use the transfer effect to fool classifiers hosted by MetaMind, Amazon, and Google. So these are all just different machine learning APIs where you can upload a dataset and the API will train the model for you. And then you don't actually know, in most cases, which model is trained for you. You don't have access to its weights or anything like that. So Nicolas would train his own copy of the model on his own personal desktop, and then use it to build adversarial examples that fool the API hosted model. Later, Berkeley showed you could fool Clarifai in this way. Yeah? - [Man] What did you mean when you said machine-generated adversarial examples don't generally fool us? Because I thought that was part of the point, that we generally make machine-generated adversarial examples where just a few pixels change. - Oh, so if we look at, for example, like this picture of the panda. To us it looks like a panda. To most machine learning models it looks like a gibbon. And so this change isn't interfering with our brains, but it reliably fools lots of different machine learning models.
I saw somebody actually took this image of the perturbation out of our paper, and they pasted it on their Facebook profile picture to see if it could interfere with Facebook recognizing them. And they said that it did. I don't think that Facebook has a gibbon tag though, so we don't know if they managed to make it think that they were a gibbon. And one of the other things that you can do that's of fairly high practical significance is you can actually fool malware detectors. Kathrin Grosse at Saarland University wrote a paper about this. And there's starting to be a few others. There's a model called MalGAN that actually uses a GAN to generate adversarial examples for malware detectors. Another thing that matters a lot if you are interested in using these attacks in the real world and defending against them in the real world is that a lot of the time you don't actually have access to the digital input to a model. If you're interested in the perception system for a self-driving car or a robot, you probably don't get to actually write to the buffer on the robot itself. You just get to show the robot objects that it can see through a camera lens. So my colleagues Alexey Kurakin and Samy Bengio and I wrote a paper where we studied whether we can actually fool an object recognition system running on a phone, where it perceives the world through a camera. Our methodology was really straightforward. We just printed out several pictures of adversarial examples. And we found that the object recognition system run by the camera was fooled by them. The system on the camera is actually different from the model that we used to generate the adversarial examples. So we're showing not just transfer across the changes that happen when you use the camera, we're also showing transfer across the model that you use. So the attacker could conceivably fool a system that's deployed in a physical agent, even if they don't have access to the model on that agent and even if they can't interface directly with the agent but just subtly modify objects that it can see in its environment. Yeah? - [Man] Why doesn't the low quality camera's image noise destroy the adversarial example? Because that's what one would expect. - Yeah, so I think a lot of that comes back to the maps that I showed earlier. If you cross over the boundary into the realm of adversarial examples, they occupy a pretty wide space and they're very densely packed in there. So if you jostle around a little bit, you're not going to recover from the adversarial attack. If the camera noise, somehow or other, was aligned with the negative gradient of the cost, then the camera could take a gradient descent step downhill and rescue you from the uphill step that the adversary took. But probably the camera's taking more or less something that you could model as a random direction. Like clearly when you use the camera more than once it's going to do the same thing each time, but from the point of view of how that direction relates to the image classification problem, it's more or less a random variable that you sample once. And it seems unlikely to align exactly with the normal to this class boundary. There's a lot of different defenses that we'd like to build. And it's a little bit disappointing that I'm mostly here to tell you about attacks. I'd like to tell you how to make your systems more robust. But basically every defense we've tried has failed pretty badly. And in fact, even when people have published that they successfully defended, those defenses tend not to hold up.
Well, there's been several papers on arXiv over the last several months. Nicholas Carlini at Berkeley just released a paper where he shows that 10 of those defenses are broken. So this is a really, really hard problem. You can't just make it go away by using traditional regularization techniques. In particular, generative models are not enough to solve the problem. A lot of people say, "Oh, the problem that's going on here is you don't know anything about the distribution over the input pixels. If you could just tell whether the input is realistic or not then you'd be able to resist it." It turns out that what matters here, more than getting the right distribution over the inputs x, is getting the right posterior distribution over the class labels y given the inputs x. So just using a generative model is not enough to solve the problem. I think a very carefully designed generative model could possibly do it. Here I show two different modes of a bimodal distribution, and we have two different generative models that try to capture these modes. On the left we have a mixture of two Gaussians. On the right we have a mixture of two Laplacians. You cannot really tell the difference visually between the distributions they impose over x, and the difference in the likelihood they assign to the training data is negligible. But the posterior distribution they assign over classes is extremely different. On the left we get a logistic regression classifier that has very high confidence out in the tails of the distribution where there is never any training data. On the right, with the Laplacian distribution, we level off to more or less 50-50. Yeah? [speaker drowned out] The issue is that it's a nonstationary distribution. So if you train it to recognize one kind of adversarial example, then it will become vulnerable to another kind that's designed to fool its detector. That's one of the categories of defenses that Nicholas broke in his latest paper that he put out. So here basically the choice of exactly the family of generative model has a big effect on whether the posterior becomes deterministic or uniform as the model extrapolates. And if we could design a really rich, deep generative model that can generate realistic ImageNet images and also correctly calculate its posterior distribution, then maybe something like this approach could work. But at the moment it's really difficult to get any of those probabilistic calculations correct. And what usually happens is, somewhere or other we make an approximation that causes the posterior distribution to extrapolate very linearly again. It's been a difficult engineering challenge to build generative models that actually capture these distributions accurately. The universal approximation theorem tells us that whatever shape we would like our classification function to have, a neural net that's big enough ought to be able to represent it. It's an open question whether we can train the neural net to have that function, but we know that we should at least be able to give it the right shape. So far we've been getting neural nets that give us these very linear decision functions, and we'd like to get something that looks a little bit more like a step function. So what if we actually just train on adversarial examples? For every input x in the training set, we also train on x plus an adversarial perturbation, and say that it should map to the same class label as the original. It turns out that this sort of works.
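A minimal sketch of what that training procedure could look like, assuming a model, an optimizer, and a labeled batch (all placeholder names), and using the fast gradient sign method as the inner attack because it is cheap enough to run at every weight update:

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.3):
    """One step of adversarial training: build FGSM examples on the fly and
    train on both the clean and the perturbed batch with the original labels."""
    # Build adversarial examples against the current weights (one fast step).
    x_req = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_req), y).backward()
    x_adv = (x_req + epsilon * x_req.grad.sign()).clamp(0.0, 1.0).detach()

    # Train on clean and adversarial examples with the same labels.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```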
You can generally resist the same kind of attack that you train on. And an important consideration is making sure that you can run your attack very quickly so that you can train on lots of examples. So here the green curve at the very top, the one that doesn't really descend much at all, that's the test set error on adversarial examples if you train on clean examples only. The cyan curve that descends more or less diagonally through the middle of the plot, that's the test set error on adversarial examples if you train on adversarial examples. You can see that it does actually reduce significantly. It gets down to a little bit less than 1% error. And the important thing to keep in mind here is that these are fast gradient sign method adversarial examples. It's much harder to resist iterative multi-step adversarial examples where you run an optimizer for a long time searching for a vulnerability. And another thing to keep in mind is that we're testing on the same kind of adversarial examples that we train on. It's harder to generalize from one optimization algorithm to another. By comparison, if you look at what happens on clean examples, the blue curve shows the clean test set error rate if you train only on clean examples. The red curve shows what happens if you train on both clean and adversarial examples. We see that the red curve actually drops lower than the blue curve. So on this task, training on adversarial examples actually helped us to do the original task better. This is because in the original task we were overfitting. Training on adversarial examples is a good regularizer. If you're overfitting it can make you overfit less. If you're underfitting it'll just make you underfit worse. Other kinds of models besides deep neural nets don't benefit as much from adversarial training. So when we started this whole topic of study we thought that deep neural nets might be uniquely vulnerable to adversarial examples. But it turns out that actually they're one of the few models that has a clear path to resisting them. Linear models are just always going to be linear. They don't have much hope of resisting adversarial examples. Deep neural nets can be trained to be nonlinear, and so it seems like there's a path to a solution for them. Even with adversarial training, we still find that we aren't able to make models where if you optimize the input to belong to different classes, you get examples in those classes. Here I start with a CIFAR-10 truck and I turn it into each of the 10 different CIFAR-10 classes. Toward the middle of the plot you can see that the truck has started to look a little bit like a bird. But the bird class is the only one that we've come anywhere near hitting. So even with adversarial training, we're still very far from solving this problem. When we do adversarial training, we rely on having labels for all the examples. We have an image that's labeled as a bird. We make a perturbation that's designed to decrease the probability of the bird class, and we train the model that the image should still be a bird. But what if you don't have labels? It turns out that you can actually train without labels. You first ask the model to predict a label for the image. So if you've trained for a little while and your model isn't perfect yet, it might say, oh, maybe this is a bird, maybe it's a plane. There's some blue sky there, I'm not sure which of these two classes it is.
Then we make an adversarial perturbation that's intended to change the guess and we just try to make it say, oh this is a truck, or something like that. It's not whatever you believed it was before. You can then train it to say that the distribution of our classes should still be the same as it was before, but this should still be considered probably a bird or a plane. This technique is called virtual adversarial training, and it was invented by Takeru Miyato. He was my Intern at Google after he did this work. At Google we invited him to come and apply his invention to text classification, because this ability to learn from unlabeled examples makes it possible to do semi-supervised learning where you learn from both unlabeled and labeled examples. And there's quite a lot of unlabeled text in the world. So we were able to bring down the error rate on several different text classification tasks by using this virtual adversarial training. Finally, there's a lot of problems where we'd like to use neural nets to guide optimization procedures. If we want to make a very, very fast car, we could imagine a neural net that looks at the blueprints for a car and predicts how fast it will go. If we could then optimize with respect to the input of the neural net and find the blueprint that it predicts would go the fastest, we could build an incredibly fast car. Unfortunately, what we get right now is not a blueprint for a fast car. We get an adversarial example that the model thinks is going to be very fast. If we're able to solve the adversarial example problem, we'll be able to solve this model-based optimization problem. I like to call model-based optimization the universal engineering machine. If we're able to do model-based optimization, we'll be able to write down a function that describes a thing that doesn't exist yet but we wish that we had. And then gradient descent and neural nets will figure out how to build it for us. We can use that to design new genes and new molecules for medicinal drugs, and new circuits to make GPUs run faster and things like that. So I think overall, solving this problem could unlock a lot of potential technological advances. In conclusion, attacking machine learning models is extremely easy, and defending them is extremely difficult. If you use adversarial training you can get a little bit of a defense, but there's still many caveats associated with that defense. Adversarial training and virtual adversarial training also make it possible to regularize your model and even learn from unlabeled data so you can do better on regular test examples, even if you're not concerned about facing an adversary. And finally, if we're able to solve all of these problems, we'll be able to build a black box model-based optimization system that can solve all kinds of engineering problems that are holding us back in many different fields. I think I have a few minutes left for questions. [audience applauds] [speaker drowned out] Yeah. Oh, so, there's some determinism to the choice of those 50 directions. Oh right, yeah. So repeating the questions. I've said that the same perturbation can fool many different models or the same perturbation can be applied to many different clean examples. I've also said that the subspace of adversarial perturbations is only about 50 dimensional, even if the input dimension is 3,000 dimensional. So how is it that these subspaces intersect? The reason is that the choice of the subspace directions is not completely random. 
It's generally going to be something like pointing from one class centroid to another class centroid. And if you look at that vector and visualize it as an image, it might not be meaningful to a human just because humans aren't very good at imagining what class centroids look like. And we're really bad at imagining differences between centroids. But there is more or less this systematic effect that causes different models to learn similar linear functions, just because they're trying to solve the same task. [speaker drowned out] Yeah, so the question is, is it possible to identify which layer contributes the most to this issue? One thing is that if you, the last layer is somewhat important. Because, say that you made a feature extractor that's completely robust to adversarial perturbations and can shrink them to be very, very small, and then the last layer is still linear. Then it has all the problems that are typically associated with linear models. And generally you can do adversarial training where you perturb all the different layers, all the hidden layers as well as the input. In this lecture I only described perturbing the input because it seems like that's where most of the benefit comes from. The one thing that you can't do with adversarial training is perturb the very last layer before the softmax, because that linear layer at the end has no way of learning to resist the perturbations. Doing adversarial training at that layer usually just breaks the whole process. But other than that, it seems very problem dependent. There's a paper by Sara Sabour and her collaborators called Adversarial Manipulation of Deep Representations, where they design adversarial examples that are intended to fool different layers of the net. They report some things about, like, how large of a perturbation is needed at the input to get different sizes of perturbation at different hidden layers. I suspect that if you trained the model to resist perturbations at one layer, then another layer would become more vulnerable and it would be like a moving target. [speaker drowned out] Yes, so the question is, how many adversarial examples are needed to improve the misclassification rate? Some of our plots we include learning curves. Or some of our papers we include learning curves, so you can actually see, like in this one here. Every time we do an epoch we've generated the same number of adversarial examples as there are training examples. So every epoch here is 50,000 adversarial examples. You can see that adversarial training is a very data hungry process. You need to make new adversarial examples every time you update the weights. And they're constantly changing in reaction to whatever the model has learned most recently. [speaker drowned out] Oh, the model-based optimization, yeah. Yeah, so the question is just to elaborate further on this problem. So most of the time that we have a machine learning model, it's something like a classifier or a regression model where we give it an input from the test set and it gives us an output. And usually that input is randomly occurring and comes from the same distribution as the training set. We usually just run the model, get its prediction, and then we're done with it. Sometimes we have feedback loops, like for recommender systems. 
If you work at Netflix and you recommend a movie to a viewer, then they're more likely to watch that movie and then rate it, and then there's going to be more ratings of it in your training set so you'll recommend it to more people in the future. So there's this feedback loop from the output of your model to the input. Most of the time when we build machine vision systems, there's no feedback loop from their output to their input. If we imagine a setting where we start using an optimization algorithm to find inputs that maximize some property of the output, like if we have a model that looks at the blueprints of a car and outputs the expected speed of the car, then we could use gradient ascent to look for the blueprints that correspond to the fastest possible car. Or for example if we're designing a medicine, we could look for the molecular structure that we think is most likely to cure some form of cancer, or the least likely to cause some kind of liver toxicity effect. The problem is that once we start using optimization to look for these inputs that maximize the output of the model, the input is no longer an independent sample from the same distribution as we used at the training set time. The model is now guiding the process that generates the data. So we end up finding essentially adversarial examples. Instead of the model telling us how we can improve the input, what we usually find in practice is that we've got an input that fools the model into thinking that the input corresponds to something great. So we'd find molecules that are very toxic but the model thinks they're very non-toxic. Or we'd find cars that are very slow but the model thinks are very fast. [speaker drowned out] Yeah, so the question is, here the frog class is boosted by going in either the positive or negative adversarial direction. And in some of the other slides, like these maps, you don't get that effect where subtracting epsilon off eventually boosts the adversarial class. Part of what's going on is I think I'm using larger epsilon here. And so you might eventually see that effect if I'd made these maps wider. I made the maps narrower because it's like quadratic time to build a 2D map and it's linear time to build a 1D cross section. So I just didn't afford the GPU time to make the maps quite as wide. I also think that this might just be a weird effect that happened randomly on this one example. It's not something that I remember being used to seeing a lot of the time. Most things that I observe don't happen perfectly consistently. But if they happen, like, 80% of the time then I'll put them in my slide. A lot of what we're doing is trying trying to figure out more or less what's going on, and so if we find that something happens 80% of the time, then I consider it to be the dominant phenomenon that we're trying to explain. And after we've got a better explanation for that then I might start to try to explain some of the weirder things that happen, like the frog happening with negative epsilon. [speaker drowned out] I didn't fully understand the question. It's about the dimensionality of the adversarial? Oh, okay. So the question is, how is the dimension of the adversarial subspace related to the dimension of the input? And my answer is somewhat embarrassing, which is that we've only run this method on two datasets, so we actually don't have a good idea yet. But I think it's something interesting to study. If I remember correctly, my coauthors open sourced our code. 
So you could probably run it on ImageNet without too much trouble. My contribution to that paper was in the week that I was unemployed between working at OpenAI and working at Google, so I had access to no GPUS and I ran that experiment on my laptop on CPU, so it's only really small datasets. [chuckles] [speaker drowned out] Oh, so the question is, do we end up perturbing clean examples to low confidence adversarial examples? Yeah, in practice we usually find that we can get very high confidence on the output examples. One thing in high dimensions that's a little bit unintuitive is that just getting the sign right on very many of the input pixels is enough to get a really strong response. So the angle between the weight vector matters a lot more than the exact coordinates in high dimensional systems. Does that make enough sense? Yeah, okay. - [Man] So we're actually going to [mumbles]. So if you guys need to leave, that's fine. But let's thank our speaker one more time for getting-- [audience applauds]
Lecture 8: Deep Learning Software (Lecture Collection: Convolutional Neural Networks for Visual Recognition, Spring 2017)
- Hello? Okay, it's after 12, so I want to get started. So today, lecture eight, we're going to talk about deep learning software. This is a super exciting topic because it changes a lot every year. But it also means it's a lot of work to give this lecture 'cause it changes a lot every year. But as usual, a couple of administrative notes before we dive into the material. So as a reminder the project proposals for your course projects were due on Tuesday. So hopefully you all turned that in, and hopefully you all have a somewhat good idea of what kind of projects you want to work on for the class. So we're in the process of assigning TAs to projects based on what the project area is and the expertise of the TAs. So we'll have some more information about that in the next couple days I think. We're also in the process of grading assignment one, so stay tuned and we'll get those grades back to you as soon as we can. Another reminder is that assignment two has been out for a while. That's going to be due next week, a week from today, Thursday. And again, when working on assignment two, remember to stop your Google Cloud instances when you're not working to try to preserve your credits. And another point of confusion I just wanted to re-emphasize is that for assignment two you really only need to use GPU instances for the last notebook. For all of the other notebooks it's just Python and Numpy so you don't need any GPUs for those questions. So again, conserve your credits, only use GPUs when you need them. And the final reminder is that the midterm is coming up. It's kind of hard to believe we're there already, but the midterm will be in class on Tuesday, May 9th. So the midterm will be more theoretical. It'll be sort of pen and paper, working through different kinds of slightly more theoretical questions to check your understanding of the material that we've covered so far. And I think we'll probably post at least a short sort of sample of the types of questions to expect. Question? [student's words obscured due to lack of microphone] Oh yeah, the question is whether it's open-book, so we're going to say closed note, closed book. Yeah, that's what we've done in the past, just closed note, closed book; we really just want to check that you understand the intuition behind most of the stuff we've presented. So, a quick recap as a reminder of what we were talking about last time. Last time we talked about fancier optimization algorithms for deep learning models including SGD Momentum, Nesterov, RMSProp and Adam. And we saw that these relatively small tweaks on top of vanilla SGD are relatively easy to implement but can make your networks converge a bit faster. We also talked about regularization, especially dropout. So remember dropout, you're kind of randomly setting parts of the network to zero during the forward pass, and then you kind of marginalize out over that noise at test time. And we saw that this was kind of a general pattern across many different types of regularization in deep learning, where you might add some kind of noise during training, but then marginalize out that noise at test time so it's not stochastic at test time. We also talked about transfer learning where you can maybe download big networks that were pre-trained on some dataset and then fine tune them for your own problem. And this is one way that you can attack a lot of problems in deep learning, even if you don't have a huge dataset of your own.
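A minimal sketch of that pretrain-then-fine-tune pattern, assuming PyTorch and torchvision; the ResNet-18 backbone and the 10-class head are illustrative choices, not anything specific to the course assignments.

import torch
import torch.nn as nn
import torchvision.models as models

# Download a network that was pre-trained on a big dataset (ImageNet here).
model = models.resnet18(pretrained=True)

# Freeze the pre-trained features so only the new head gets updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with one sized for our own problem (10 classes here).
model.fc = nn.Linear(model.fc.in_features, 10)

# Fine-tune: optimize only the parameters of the new final layer.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)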
So today we're going to shift gears a little bit and talk about some of the nuts and bolts about writing software and how the hardware works. And a little bit, diving into a lot of details about what the software looks like that you actually use to train these things in practice. So we'll talk a little bit about CPUs and GPUs and then we'll talk about several of the major deep learning frameworks that are out there in use these days. So first, we've sort of mentioned this off hand a bunch of different times, that computers have CPUs, computers have GPUs. Deep learning uses GPUs, but we weren't really too explicit up to this point about what exactly these things are and why one might be better than another for different tasks. So, who's built a computer before? Just kind of show of hands. So, maybe about a third of you, half of you, somewhere around that ballpark. So this is a shot of my computer at home that I built. And you can see that there's a lot of stuff going on inside the computer, maybe, hopefully you know what most of these parts are. And the CPU is the Central Processing Unit. That's this little chip hidden under this cooling fan right here near the top of the case. And the CPU is actually relatively small piece. It's a relatively small thing inside the case. It's not taking up a lot of space. And the GPUs are these two big monster things that are taking up a gigantic amount of space in the case. They have their own cooling, they're taking a lot of power. They're quite large. So, just in terms of how much power they're using, in terms of how big they are, the GPUs are kind of physically imposing and taking up a lot of space in the case. So the question is what are these things and why are they so important for deep learning? Well, the GPU is called a graphics card, or Graphics Processing Unit. And these were really developed, originally for rendering computer graphics, and especially around games and that sort of thing. So another show of hands, who plays video games at home sometimes, from time to time on their computer? Yeah, so again, maybe about half, good fraction. So for those of you who've played video games before and who've built your own computers, you probably have your own opinions on this debate. [laughs] So this is one of those big debates in computer science. You know, there's like Intel versus AMD, NVIDIA versus AMD for graphics cards. It's up there with Vim versus Emacs for text editor. And pretty much any gamer has their own opinions on which of these two sides they prefer for their own cards. And in deep learning we kind of have mostly picked one side of this fight, and that's NVIDIA. So if you guys have AMD cards, you might be in a little bit more trouble if you want to use those for deep learning. And really, NVIDIA's been pushing a lot for deep learning in the last several years. It's been kind of a large focus of some of their strategy. And they put in a lot effort into engineering sort of good solutions to make their hardware better suited for deep learning. So most people in deep learning when we talk about GPUs, we're pretty much exclusively talking about NVIDIA GPUs. Maybe in the future this'll change a little bit, and there might be new players coming up, but at least for now NVIDIA is pretty dominant. So to give you an idea of like what is the difference between a CPU and a GPU, I've kind of made a little spread sheet here. 
On the top we have two of the kind of top end Intel consumer CPUs, and on the bottom we have two of NVIDIA's sort of current top end consumer GPUs. And there are a couple of general trends to notice here. Both GPUs and CPUs are kind of general purpose computing machines where they can execute programs and do sort of arbitrary instructions, but they're qualitatively pretty different. So CPUs tend to have just a few cores; for consumer desktop CPUs these days, they might have something like four or six or maybe up to 10 cores. With hyperthreading technology that means the hardware can physically run maybe eight or up to 20 threads concurrently. So the CPU can maybe do 20 things in parallel at once. So that's just not a gigantic number, but those threads for a CPU are pretty powerful. They can actually do a lot of things, they're very fast. Every CPU instruction can actually do quite a lot of stuff. And they can all work pretty independently. For GPUs it's a little bit different. So for GPUs we see that these sort of common top end consumer GPUs have thousands of cores. So the NVIDIA Titan XP which is the current top of the line consumer GPU has 3840 cores. So that's a crazy number. That's like way more than the 10 cores that you'll get for a similarly priced CPU. The downside of a GPU is that each of those cores, one, runs at a much slower clock speed. And two, they really can't do quite as much. You can't really compare CPU cores and GPU cores apples to apples. The GPU cores can't really operate very independently. They all kind of need to work together and sort of parallelize one task across many cores rather than each core totally doing its own thing. So you can't really compare these numbers directly. But it should give you the sense that due to the large number of cores, GPUs are really good for parallel things where you need to do a lot of things all at the same time, but those things are all pretty much the same flavor. Another thing to point out between CPUs and GPUs is this idea of memory. Right, so CPUs have some cache on the CPU, but that's relatively small and the majority of the memory for your CPU is pulled from your system memory, the RAM, which will maybe be like eight, 12, 16, 32 gigabytes of RAM on a typical consumer desktop these days. Whereas GPUs actually have their own RAM built into the card. There's a pretty large bottleneck communicating between the RAM in your system and the GPU, so the GPUs typically have their own relatively large block of memory within the card itself. And for the Titan XP, which again is maybe the current top of the line consumer card, this thing has 12 gigabytes of memory local to the GPU. GPUs also have their own caching system where there are sort of multiple hierarchies of caching between the 12 gigabytes of GPU memory and the actual GPU cores. And that's somewhat similar to the caching hierarchy that you might see in a CPU. So, CPUs are kind of good for general purpose processing. They can do a lot of different things. And GPUs are maybe more specialized for these highly parallelizable algorithms. So the prototypical algorithm of something that works really really well and is like perfectly suited to a GPU is matrix multiplication. So remember in matrix multiplication on the left we've got like a matrix composed of a bunch of rows.
We multiply that on the right by another matrix composed of a bunch of columns, and then this produces a final matrix where each element in the output matrix is a dot product between one of the rows and one of the columns of the two input matrices. And these dot products are all independent. Like you could imagine, for this output matrix you could split it up completely and have each of those different elements of the output matrix all being computed in parallel, and they all sort of are running the same computation, which is taking a dot product of two vectors. But exactly where they're reading that data from is different places in the two input matrices. So you could imagine that for a GPU you can just like blast this out and have all of these elements of the output matrix computed in parallel, and that could make this thing compute super fast on a GPU. So that's kind of the prototypical type of problem where a GPU is really well suited, where a CPU might have to go in and step through sequentially and compute each of these elements one by one. That picture is a little bit of a caricature because CPUs these days have multiple cores, they can do vectorized instructions as well, but still, for these like massively parallel problems GPUs tend to have much better throughput. Especially when these matrices get really really big. And by the way, convolution is kind of the same kind of story. Where you know in convolution we have this input tensor, we have this weight tensor and then every point in the output tensor after a convolution is again some inner product between some part of the weights and some part of the input. And you can imagine that a GPU could really parallelize this computation, split it all up across the many cores and compute it very quickly. So that's kind of the general flavor of the types of problems where GPUs give you a huge speed advantage over CPUs. So you can actually write programs that run directly on GPUs. So NVIDIA has this CUDA abstraction that lets you write code that kind of looks like C, but executes directly on the GPUs. But CUDA code is really really tricky. It's actually really tough to write CUDA code that's performant and actually squeezes all the juice out of these GPUs. You have to be very careful managing the memory hierarchy and making sure you don't have cache misses and branch mispredictions and all that sort of stuff. So it's actually really really hard to write performant CUDA code on your own. So as a result NVIDIA has released a lot of libraries that implement common computational primitives that are very very highly optimized for GPUs. So for example NVIDIA has a cuBLAS library that implements different kinds of matrix multiplications and different matrix operations that are super optimized, run really well on GPU, and get very close to sort of theoretical peak hardware utilization. Similarly they have a cuDNN library which implements things like convolution forward and backward passes, batch normalization, recurrent networks, all these kinds of computational primitives that we need in deep learning. NVIDIA has gone in there and released their own binaries that compute these primitives very efficiently on NVIDIA hardware. So in practice, you tend not to end up writing your own CUDA code for deep learning. You typically are just mostly calling into existing code that other people have written, much of which has been heavily optimized by NVIDIA already.
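To make the matrix-multiply point concrete, here is a tiny NumPy sketch of how the output decomposes into independent dot products; the loop below is exactly the work that a GPU spreads across its thousands of cores (the shapes are made up for illustration).

import numpy as np

A = np.random.randn(4, 3)   # a matrix of rows
B = np.random.randn(3, 5)   # a matrix of columns
C = np.zeros((4, 5))

# Every output element is its own independent dot product, so all of
# them could be computed at the same time by different cores.
for i in range(4):
    for j in range(5):
        C[i, j] = np.dot(A[i, :], B[:, j])

assert np.allclose(C, A.dot(B))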
There's another sort of language called OpenCL which is a bit more general. It runs on more than just NVIDIA GPUs; it can run on AMD hardware, it can run on CPUs. But nobody's really spent a large amount of effort and energy trying to get optimized deep learning primitives for OpenCL, so it tends to be a lot less performant than the super optimized versions in CUDA. So maybe in the future we might see a bit more of an open standard and we might see this across many more types of platforms, but at least for now, NVIDIA's kind of the main game in town for deep learning. So you can check out, there are a lot of different resources for learning about how you can do GPU programming yourself. It's kind of fun. It's sort of a different paradigm of writing code because it's this massively parallel architecture, but that's a bit beyond the scope of this course. And again, you don't really need to write your own CUDA code much in practice for deep learning. And in fact, I've never written my own CUDA code for any research project, but it is kind of useful to know like how it works and what the basic ideas are even if you're not writing it yourself. So if you want to look at kind of CPU GPU performance in practice, I did some benchmarks last summer comparing a decent Intel CPU against a bunch of different GPUs that were sort of near top of the line at that time. These were my own benchmarks, and you can find more details on GitHub, but my findings were that for things like VGG 16 and 19, and various ResNets, you typically see something like a 65 to 75 times speed up when running the exact same computation on a top of the line GPU, in this case a Pascal Titan X, versus a top of the line, well, not quite top of the line CPU, which in this case was an Intel E5 processor. Although I'd like to make one caveat here, which is that you always need to be super careful whenever you're reading any kind of benchmarks about deep learning, because it's super easy to be unfair between different things. And you kind of need to know a lot of the details about what exactly is being benchmarked in order to know whether or not the comparison is fair. So in this case I'll come right out and tell you that probably this comparison is a little bit unfair to the CPU because I didn't spend a lot of effort trying to squeeze the maximal performance out of CPUs. I probably could have tuned the BLAS libraries better for CPU performance. And I probably could have gotten these numbers a bit better. This was sort of out of the box performance between just installing Torch and running it on a CPU, and just installing Torch and running it on a GPU. So this is kind of out of the box performance, but it's not really like peak possible theoretical throughput on the CPU. But that being said, I think there are still pretty substantial speed ups to be had here. Another kind of interesting outcome from this benchmarking was comparing these optimized cuDNN libraries from NVIDIA for convolution and whatnot versus sort of more naive CUDA that had been hand written out in the open source community. And you can see that if you compare the same networks on the same hardware with the same deep learning framework, and the only difference is swapping out cuDNN versus sort of hand written, less optimized CUDA, you can see something like nearly a three X speed up across the board when you switch from the relatively simple CUDA to these like super optimized cuDNN implementations.
So in general, whenever you're writing code on GPU, you should probably almost always just make sure you're using cuDNN, because you're leaving probably a three X performance boost on the table if you're not calling into cuDNN for your stuff. So another problem that comes up in practice when you're training these things is that, you know, your model is maybe sitting on the GPU, the weights of the model are in that 12 gigabytes of local storage on the GPU, but your big dataset is sitting over on the right on a hard drive or an SSD or something like that. So if you're not careful you can actually bottleneck your training by just trying to read the data off the disk. 'Cause the GPU is super fast, it can compute forward and backward quite fast, but if you're reading sequentially off a spinning disk, you can actually bottleneck your training quite a lot, and that can be really bad and slow you down. So some solutions here are that, like, you know, if your dataset's really small, sometimes you might just read the whole dataset into RAM. Or even if your dataset isn't so small, but you have a giant server with a ton of RAM, you might do that anyway. You can also make sure you're using an SSD instead of a hard drive; that can help a lot with read throughput. Another common strategy is to use multiple threads on the CPU that are pre-fetching data off disk and buffering it in RAM, so that you can keep feeding that buffered data down to the GPU with good performance. This is a little bit painful to set up, but again, these GPUs are so fast that if you're not really careful with trying to feed them data as quickly as possible, just reading the data can sometimes bottleneck the whole training process. So that's something to be aware of. So that's kind of the brief introduction to sort of GPU and CPU hardware in practice when it comes to deep learning. And then I wanted to switch gears a little bit and talk about the software side of things. The various deep learning frameworks that people are using in practice. But I guess before I move on, are there any sort of questions about CPU versus GPU? Yeah, question? [student's words obscured due to lack of microphone] Yeah, so the question is what can you do mechanically when you're coding to avoid these problems? Probably the biggest thing you can do in software is set up sort of pre-fetching on the CPU. Like, sort of a naive thing would be you have this sequential process where you first read data off disk, wait for the minibatch to be read, then feed the minibatch to the GPU, then go forward and backward on the GPU, then read another minibatch, and sort of do this all in sequence. Instead you might have CPU threads running in the background that are fetching data off the disk so that you can sort of interleave all of these things. Like, the GPU is computing, the CPU background threads are feeding data off disk, and your main thread is just doing a bit of synchronization between these things so they're all happening in parallel. And thankfully if you're using some of these deep learning frameworks that we're about to talk about, then some of this work has already been done for you, 'cause it's a little bit painful. So the landscape of deep learning frameworks is super fast moving. So last year when I gave this lecture I talked mostly about Caffe, Torch, Theano and TensorFlow.
And when I last gave this talk, again more than a year ago, TensorFlow was relatively new. It had not seen super widespread adoption yet at that time. But now I think in the last year TensorFlow has gotten much more popular. It's probably the main framework of choice for many people. So that's a big change. We've also seen a ton of new frameworks sort of popping up like mushrooms in the last year. So in particular Caffe2 and PyTorch are new frameworks from Facebook that I think are pretty interesting. There's also a ton of other frameworks. Baidu has Paddle, Microsoft has CNTK, Amazon is mostly using MXNet, and there's a ton of other frameworks as well that I'm less familiar with and really don't have time to get into. But one interesting thing to point out from this picture is that kind of the first generation of deep learning frameworks that really saw wide adoption were built in academia. So Caffe was from Berkeley, Torch was developed originally at NYU and also in collaboration with Facebook. And Theano was mostly built at the University of Montreal. But these kind of next generation deep learning frameworks all originated in industry. So Caffe2 is from Facebook, PyTorch is from Facebook, TensorFlow is from Google. So there's kind of an interesting shift that we've seen in the landscape over the last couple of years, which is that these ideas have really moved a lot from academia into industry. And now industry is kind of giving us these big powerful nice frameworks to work with. So today I wanted to mostly talk about PyTorch and TensorFlow 'cause I personally think that those are probably the ones you should be focusing on for a lot of research type problems these days. I'll also talk a bit about Caffe and Caffe2, but with probably a little bit less emphasis on those. And before we move any farther, I thought I should make my own biases a little bit more explicit. So I've worked with Torch mostly for the last several years. And I've used it quite a lot, I like it a lot. And then in the last year I've mostly switched to PyTorch as my main research framework. So I have a little bit less experience with some of these others, especially TensorFlow, but I'll still try to do my best to give you a fair picture and a decent overview of these things. So, remember that in the last several lectures we've hammered in this idea of computational graphs sort of over and over. That whenever you're doing deep learning, you want to think about building some computational graph that computes whatever function you want to compute. So in the case of a linear classifier you'll combine your data X and your weights W with a matrix multiply. You'll do some kind of hinge loss to compute your loss. You'll have some regularization term, and you imagine stitching together all these different operations into some graph structure. Remember that these graph structures can get pretty complex in the case of a big neural net; now there are many different layers, many different activations, many different weights spread all around in a pretty complex graph. And as you move to things like neural Turing machines then you can get these really crazy computational graphs that you can't even really draw because they're so big and messy. So for the point of deep learning frameworks, there are really kind of three main reasons why you might want to use one of these deep learning frameworks rather than just writing your own code.
So the first would be that these frameworks enable you to easily build and work with these big hairy computational graphs without kind of worrying about a lot of those bookkeeping details yourself. Another major idea is that whenever we're working in deep learning we always need to compute gradients. We're always computing some loss, and we're always computing the gradient of our weights with respect to the loss. And we'd like that gradient computation to happen automatically; you don't want to have to write that code yourself. You want the framework to handle all these back propagation details for you so you can just think about writing down the forward pass of your network and have the backward pass sort of come out for free without any additional work. And finally you want all this stuff to run efficiently on GPUs so you don't have to worry too much about these low level hardware details about cuBLAS and cuDNN and CUDA and moving data between the CPU and GPU memory. You kind of want all those messy details to be taken care of for you. So those are kind of some of the major reasons why you might choose to use frameworks rather than writing your own stuff from scratch. So as kind of a concrete example of a computational graph we can maybe write down this super simple thing. Where we have three inputs, X, Y, and Z. We're going to combine X and Y to produce A. Then we're going to combine A and Z to produce B, and then finally we're going to do some maybe summing out operation on B to give some scalar final result C. So you've probably written enough Numpy code at this point to realize that it's super easy to implement this bit of computation in Numpy, right? You can just kind of write down in Numpy that you want to generate some random data, you want to multiply two things, you want to add two things, you want to sum out a couple things. And it's really easy to do this in Numpy. But then the question is, suppose that we want to compute the gradient of C with respect to X, Y, and Z. If you're working in Numpy, you kind of need to write out this backward pass yourself. And you've gotten a lot of practice with this on the homeworks, but it can be kind of a pain and a little bit annoying and messy once you get to really big complicated things. The other problem with Numpy is that it doesn't run on the GPU. So Numpy is definitely CPU only. And you're never going to be able to take advantage of these GPU accelerated speedups if you're stuck working in Numpy. And again, it's a pain to have to compute your own gradients in all these situations. So, kind of the goal of most deep learning frameworks these days is to let you write code in the forward pass that looks very similar to Numpy, but lets you run it on the GPU and lets you automatically compute gradients. And that's kind of the big picture goal of most of these frameworks. So if we look at an example in TensorFlow of the exact same computational graph, we now see that in this forward pass, you write code that ends up looking very very similar to the Numpy forward pass, where you're kind of doing these multiplication and addition operations. But now TensorFlow has this magic line that just computes all the gradients for you. So now you don't have to go in and write your own backward pass, and that's much more convenient.
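A rough sketch of the two versions being compared here, using the TensorFlow 1.x style API of that era; the shapes are made up, and this is meant to mirror the kind of code on the slides rather than reproduce it exactly.

import numpy as np
import tensorflow as tf

# Numpy: the forward pass is easy, but the backward pass is written by hand.
x, y, z = np.random.randn(3, 4), np.random.randn(3, 4), np.random.randn(3, 4)
a = x * y
b = a + z
c = np.sum(b)

grad_b = np.ones((3, 4))         # dc/db
grad_a, grad_z = grad_b, grad_b  # b = a + z
grad_x = grad_a * y              # a = x * y
grad_y = grad_a * x

# TensorFlow (1.x style): same forward pass, gradients come from one line.
tx = tf.placeholder(tf.float32)
ty = tf.placeholder(tf.float32)
tz = tf.placeholder(tf.float32)
ta = tx * ty
tb = ta + tz
tc = tf.reduce_sum(tb)
grads = tf.gradients(tc, [tx, ty, tz])   # the "magic line"

with tf.Session() as sess:
    out = sess.run([tc] + grads, feed_dict={tx: x, ty: y, tz: z})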
The other nice thing about TensorFlow is you can really just, like with one line you can switch all this computation between CPU and GPU. So here, if you just add this with statement before you're doing this forward pass, you just can explicitly tell the framework, hey I want to run this code on the CPU. But now if we just change that with statement a little bit with just with a one character change in this case, changing that C to a G, now the code runs on GPU. And now in this little code snippet, we've solved these two problems. We're running our code on the GPU and we're having the framework compute all the gradients for us, so that's really nice. And PyTorch kind looks almost exactly the same. So again, in PyTorch you kind of write down, you define some variables, you have some forward pass and the forward pass again looks very similar to like, in this case identical to the Numpy code. And then again, you can just use PyTorch to compute gradients, all your gradients with just one line. And now in PyTorch again, it's really easy to switch to GPU, you just need to cast all your stuff to the CUDA data type before you rung your computation and now everything runs transparently on the GPU for you. So if you kind of just look at these three examples, these three snippets of code side by side, the Numpy, the TensorFlow and the PyTorch you see that the TensorFlow and the PyTorch code in the forward pass looks almost exactly like Numpy which is great 'cause Numpy has a beautiful API, it's really easy to work with. But we can compute gradients automatically and we can run the GPU automatically. So after that kind of introduction, I wanted to dive in and talk in a little bit more detail about kind of what's going on inside this TensorFlow example. So as a running example throughout the rest of the lecture, I'm going to use the training a two-layer fully connected ReLU network on random data as kind of a running example throughout the rest of the examples here. And we're going to train this thing with an L2 Euclidean loss on random data. So this is kind of a silly network, it's not really doing anything useful, but it does give you the, it's relatively small, self contained, the code fits on the slide without being too small, and it lets you demonstrate kind of a lot of the useful ideas inside these frameworks. So here on the right, oh, and then another note, I'm kind of assuming that Numpy and TensorFlow have already been imported in all these code snippets. So in TensorFlow you would typically divide your computation into two major stages. First, we're going to write some code that defines our computational graph, and that's this red code up in the top half. And then after you define your graph, you're going to run the graph over and over again and actually feed data into the graph to perform whatever computation you want it to perform. So this is the really, this is kind of the big common pattern in TensorFlow. You'll first have a bunch of code that builds the graph and then you'll go and run the graph and reuse it many many times. So if you kind of dive into the code of building the graph in this case. Up at the top you see that we're defining this X, Y, w1 and w2, and we're creating these tf.placeholder objects. So these are going to be input nodes to the graph. These are going to be sort of entry points to the graph where when we run the graph, we're going to feed in data and put them in through these input slots in our computational graph. 
So this is not actually like allocating any memory right now. We're just sort of setting up these input slots to the graph. Then we're going to use those input slots, which are now kind of like these symbolic variables, and we're going to perform different TensorFlow operations on these symbolic variables in order to set up what computation we want to run on those variables. So in this case we're doing a matrix multiplication between X and w1, we're doing some tf.maximum to do a ReLU nonlinearity, and then we're doing another matrix multiplication to compute our output predictions. And then we're again using sort of basic tensor operations to compute our Euclidean distance, our L2 loss, between our prediction and the target Y. Another thing to point out here is that these lines of code are not actually computing anything. There's no data in the system right now. We're just building up this computational graph data structure telling TensorFlow which operations we want to eventually run once we put in real data. So this is just building the graph, this is not actually doing anything. Then we have this magical line where, after we've computed our loss with these symbolic operations, we can just ask TensorFlow to compute the gradient of the loss with respect to w1 and w2 in this one magical, beautiful line. And this avoids you writing all your own backprop code that you had to do in the assignments. But again there's no actual computation happening here. This is just sort of adding extra operations to the computational graph, where now the computational graph has these additional operations which will end up computing these gradients for you. So now at this point we've built our computational graph; we have this big graph data structure in memory that knows what operations we want to perform to compute the loss and gradients. And now we enter a TensorFlow session to actually run this graph and feed it with data. So then, once we've entered the session, we actually need to construct some concrete values that will be fed to the graph. TensorFlow just expects to receive data from Numpy arrays in most cases. So here we're just creating concrete actual values for X, Y, w1 and w2 using Numpy and then storing these in some dictionary. And now here is where we're actually running the graph. So you can see that we're calling session.run to actually execute some part of the graph. The first argument tells TensorFlow which parts of the graph we actually want as output; in this case we tell it that we actually want to compute loss, grad_w1 and grad_w2, and we need to pass in, with this feed dict parameter, the actual concrete values that will be fed to the graph. And then, in this one line, it's going and running the graph and computing those values for loss, grad_w1 and grad_w2, and then returning the actual concrete values for those as Numpy arrays again. So after you unpack this output in the second line, you get Numpy arrays with the loss and the gradients. So then you can go and do whatever you want with these values. So then, this has only run sort of one forward and backward pass through our graph, and it only takes a couple extra lines if we actually want to train the network. So here we're now running the graph many times in a loop, so we're doing a for loop and in each iteration of the loop, we're calling session.run asking it to compute the loss and the gradients.
And now we're doing a manual gradient descent step using those computed gradients to update our current values of the weights. So if you actually run this code and plot the losses, then you'll see that the loss goes down and the network is training and this is working pretty well. So this is kind of like a super bare bones example of training a fully connected network in TensorFlow. But there's a problem here. So here, remember that on the forward pass, every time we execute this graph, we're actually feeding in the weights. We have the weights as Numpy arrays and we're explicitly feeding them into the graph. And now when the graph finishes executing it's going to give us these gradients. And remember the gradients are the same size as the weights. So this means that every time we're running the graph here, we're copying the weights from Numpy arrays into TensorFlow, then getting the gradients, and then copying the gradients from TensorFlow back out to Numpy arrays. So if you're just running on CPU, this is maybe not a huge deal, but remember we talked about the CPU GPU bottleneck and how it's very expensive to copy data between CPU memory and GPU memory. So if your network is very large and your weights and gradients are very big, then doing something like this would be super expensive and super slow because we'd be copying all kinds of data back and forth between the CPU and the GPU at every time step. So that's bad, we don't want to do that. We need to fix that. So, obviously TensorFlow has some solution to this. And the idea is that now we want our weights, w1 and w2, rather than being placeholders that we expect to feed in to the network on every forward pass, instead we define them as variables. So a variable is a value that lives inside the computational graph, and it's going to persist inside the computational graph across different times when you run the same graph. So now instead of declaring these w1 and w2 as placeholders, we just construct them as variables. But now since they live inside the graph, we also need to tell TensorFlow how they should be initialized, right? Because in the previous case we were feeding in their values from outside the graph, so we initialized them in Numpy, but now because these things live inside the graph, TensorFlow is responsible for initializing them. So we need to pass in a tf.random_normal operation, which again is not actually initializing them when we run this line; this is just telling TensorFlow how we want them to be initialized. So there's a little bit of confusing indirection going on here. And now, remember in the previous example we were actually updating the weights outside of the computational graph. In the previous example, we were computing the gradients and then using them to update the weights as Numpy arrays and then feeding in the updated weights at the next time step. But now because we want these weights to live inside the graph, this operation of updating the weights needs to also be an operation inside the computational graph. So now we use this assign function which mutates these variables inside the computational graph, and now the mutated value will persist across multiple runs of the same graph. So now when we run this graph and train the network, we need to run the graph once with a little bit of special incantation to tell TensorFlow to set up these variables that are going to live inside the graph.
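A sketch of what that variables-based version of the graph can look like, again in TensorFlow 1.x style with made-up sizes and learning rate; the assign operations and the initializer incantation are the important parts.

import tensorflow as tf

N, D, H = 64, 1000, 100
x = tf.placeholder(tf.float32, shape=(N, D))
y = tf.placeholder(tf.float32, shape=(N, D))

# The weights now live inside the graph, with an initialization rule.
w1 = tf.Variable(tf.random_normal((D, H)))
w2 = tf.Variable(tf.random_normal((H, D)))

h = tf.maximum(tf.matmul(x, w1), 0.0)
y_pred = tf.matmul(h, w2)
loss = tf.reduce_sum((y_pred - y) ** 2.0)

grad_w1, grad_w2 = tf.gradients(loss, [w1, w2])

# The weight update is itself part of the graph: assign mutates the
# variables, so the new values persist across calls to session.run.
learning_rate = 1e-5
new_w1 = w1.assign(w1 - learning_rate * grad_w1)
new_w2 = w2.assign(w2 - learning_rate * grad_w2)

sess = tf.Session()
sess.run(tf.global_variables_initializer())   # the special incantation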
And then once we've done that initialization, now we can run the graph over and over again. And here, we're now only feeding in the data and labels X and Y, and the weights are living inside the graph. And here we've asked TensorFlow to compute the loss for us. And then you might think that this would train the network, but there's actually a bug here. So, if you actually run this code and you plot the loss, it doesn't train. So that's bad, it's confusing, like what's going on? We wrote this assign code, we ran the thing, we computed the loss and the gradients, and our loss is flat, what's going on? Any ideas? [student's words obscured due to lack of microphone] Yeah, so one hypothesis is that maybe we're accidentally re-initializing the w's every time we call the graph. That's a good hypothesis, but that's actually not the problem in this case. [student's words obscured due to lack of microphone] Yeah, so the answer is that we actually need to explicitly tell TensorFlow that we want to run these new_w1 and new_w2 operations. So we've built up this big computational graph data structure in memory, and now when we call run, we only told TensorFlow that we wanted to compute loss. And if you look at the dependencies among these different operations inside the graph, you see that in order to compute loss we don't actually need to perform this update operation. So TensorFlow is smart and it only computes the parts of the graph that are necessary for computing the output that you asked it to compute. So that's kind of a nice thing because it means it's only doing as much work as it needs to, but in situations like this it can be a little bit confusing and lead to behavior that you didn't expect. So the solution in this case is that we actually need to explicitly tell TensorFlow to perform those update operations. So one thing we could do, which is what was suggested, is we could add new_w1 and new_w2 as outputs and just tell TensorFlow that we want to produce these values as outputs. But that's a problem too because those new_w1, new_w2 values are again these big tensors. So now if we tell TensorFlow we want those as output, we're going to again get this copying behavior between CPU and GPU at every iteration. So that's bad, we don't want that. So there's a little trick you can do instead, which is that we add kind of a dummy node to the graph, called updates, with fake data dependencies on new_w1 and new_w2. And now when we actually run the graph, we tell it to compute both the loss and this dummy node. And this dummy node doesn't actually return any value, it just returns None, but because of the dependency that we've put into the node, it ensures that when we run the updates value, we actually also run these update operations. So, question? [student's words obscured due to lack of microphone] Is there a reason why we didn't put X and Y into the graph, and they stayed as Numpy? So in this example we're reusing the same X and Y on every iteration. So you're right, we could have just also stuck those in the graph, but in a more realistic scenario, X and Y will be minibatches of data, so those will actually change at every iteration and we will want to feed different values for those at every iteration. So in this case, they could have stayed in the graph, but in most cases they will change, so we don't want them to live in the graph. Oh, another question?
[student's words obscured due to lack of microphone] Yeah, so we've told TensorFlow that the outputs we want are loss and updates. Updates is not actually a real value. So when updates evaluates it just returns None. But because of this dependency we've told it that updates depends on these assign operations. And these assign operations live inside the computational graph and all live inside GPU memory. So then we're doing these update operations entirely on the GPU, and we're no longer copying the updated values back out of the graph. [student's words obscured due to lack of microphone] So the question is, does tf.group return None? So this gets into the trickiness of TensorFlow. So tf.group returns some crazy TensorFlow value. It sort of returns some like internal TensorFlow node operation that we can use to continue building the graph. But when you execute the graph inside session.run and we tell it we want the concrete value for updates, then that returns None. So whenever you're working with TensorFlow you have this funny indirection between building the graph and running it: while you're building the graph you get some funny symbolic object, and then you actually get a concrete value when you run the graph. So here, after you run updates, the output is None. Does that clear it up a little bit? [student's words obscured due to lack of microphone] So the question is, why is loss a value and why is updates None? That's just the way that updates works. So loss is a tensor; when we tell TensorFlow we want to run a tensor, then we get the concrete value. Updates is this kind of special other data type that does not return a value; it instead returns None. So it's kind of some TensorFlow magic that's going on there. Maybe we can talk offline if you're still confused. [student's words obscured due to lack of microphone] Yeah, yeah, that behavior is coming from the group method. So now, we kind of have this weird pattern where, when we wanted to do these different assign operations, we had to use this funny tf.group thing. That's kind of a pain, so thankfully TensorFlow gives you some convenience operations that do that kind of stuff for you. And that's called an optimizer. So here we're using a tf.train.GradientDescentOptimizer and we're telling it what learning rate we want to use. And you can imagine that there's RMSProp, there's all kinds of different optimization algorithms here. And now we call optimizer.minimize of loss, and this is a pretty magical thing, because now this call is aware that these variables w1 and w2 are marked as trainable by default. So then internally, inside this optimizer.minimize, it's going in and adding nodes to the graph which will compute the gradient of the loss with respect to w1 and w2, and then it's also performing that update operation for you, and it's doing the grouping operation for you, and it's doing the assigns. It's like doing a lot of magical stuff inside there. But then it ends up giving you this magical updates value which, if you dig through the code, is actually using tf.group, so it looks very similar internally to what we saw before. And now when we run the graph inside our loop we do the same pattern of telling it to compute loss and updates. And every time we tell the graph to compute updates, it'll actually go and update the weights. Question?
[student's words obscured due to lack of microphone] Yeah, so what is the tf.GlobalVariablesInitializer? So that's initializing w1 and w2 because these are variables which live inside the graph. So we need to, when we saw this, when we create the tf.variable we have this tf.randomnormal which is this initialization so the tf.GlobalVariablesInitializer is causing the tf.randomnormal to actually run and generate concrete values to initialize those variables. [student's words obscured due to lack of microphone] Sorry, what was the question? [student's words obscured due to lack of microphone] So it knows that a placeholder is going to be fed outside of the graph and a variable is something that lives inside the graph. So I don't know all the details about how it decides, what exactly it decides to run with that call. I think you'd need to dig through the code to figure that out, or maybe it's documented somewhere. So but now we've kind of got this, again we've got this full example of training a network in TensorFlow and we're kind of adding bells and whistles to make it a little bit more convenient. So we can also here, in the previous example we were computing the loss explicitly using our own tensor operations, TensorFlow you can always do that, you can use basic tensor operations to compute just about anything you want. But TensorFlow also gives you a bunch of convenience functions that compute these common neural network things for you. So in this case we can use tf.losses.mean_squared_error and it just does the L2 loss for us so we don't have to compute it ourself in terms of basic tensor operations. So another kind of weirdness here is that it was kind of annoying that we had to explicitly define our inputs and define our weights and then like chain them together in the forward pass using a matrix multiply. And in this example we've actually not put biases in the layer because that would be kind of an extra, then we'd have to initialize biases, we'd have to get them in the right shape, we'd have to broadcast the biases against the output of the matrix multiply and you can see that that would kind of be a lot of code. It would be kind of annoying write. And once you get to like convolutions and batch normalizations and other types of layers this kind of basic way of working, of having these variables, having these inputs and outputs and combining them all together with basic computational graph operations could be a little bit unwieldy and it could be really annoying to make sure you initialize the weights with the right shapes and all that sort of stuff. So as a result, there's a bunch of sort of higher level libraries that wrap around TensorFlow and handle some of these details for you. So one example that ships with TensorFlow, is this tf.layers inside. So now in this code example you can see that our code is only explicitly declaring the X and the Y which are the placeholders for the data and the labels. And now we say that H=tf.layers.dense, we give it the input X and we tell it units=H. This is again kind of a magical line because inside this line, it's kind of setting up w1 and b1, the bias, it's setting up variables for those with the right shapes that are kind of inside the graph but a little bit hidden from us. And it's using this xavier initializer object to set up an initialization strategy for those. 
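A sketch of the tf.layers version of the same network being described, with the xavier initializer passed in explicitly; the learning rate and sizes are made up, and this follows the TF 1.x era API rather than the exact slide code.

import numpy as np
import tensorflow as tf

N, D, H = 64, 1000, 100
x = tf.placeholder(tf.float32, shape=(N, D))
y = tf.placeholder(tf.float32, shape=(N, D))

# tf.layers sets up the weight and bias variables for each layer internally,
# with the right shapes, using the initializer we hand it.
init = tf.contrib.layers.xavier_initializer()
h = tf.layers.dense(inputs=x, units=H, activation=tf.nn.relu,
                    kernel_initializer=init)
y_pred = tf.layers.dense(inputs=h, units=D, kernel_initializer=init)

loss = tf.losses.mean_squared_error(y_pred, y)
updates = tf.train.GradientDescentOptimizer(1e-3).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    values = {x: np.random.randn(N, D), y: np.random.randn(N, D)}
    for t in range(50):
        loss_val, _ = sess.run([loss, updates], feed_dict=values)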
So before we were doing that explicitly ourselves with the tf.randomnormal business, but now here it's kind of handling some of those details for us and it's just spitting out an H, which is again the same sort of H that we saw in the previous example, it's just handling some of those details for us. And you can see here, we're also passing an activation=tf.nn.relu so it's even doing the activation, the relu activation function inside this layer for us. So it's taking care of a lot of these architectural details for us. Question? [student's words obscured due to lack of microphone] The question is, does the xavier initializer default to a particular distribution? I'm sure it has some default, I'm not sure what it is. I think you'll have to look at the documentation. But it seems to be a reasonable strategy, I guess. And in fact if you run this code, it converges much faster than the previous one because the initialization is better. And you can see that we're using two calls to tf.layers and this lets us build our model without doing all these explicit bookkeeping details ourselves. So this is maybe a little bit more convenient. But tf.contrib.layers is really not the only game in town. There's like a lot of different higher level libraries that people build on top of TensorFlow. And it's kind of due to this basic impedance mismatch, where the computational graph is a relatively low level thing, but when we're working with neural networks we have this concept of layers and weights, and some layers have weights associated with them, and we typically think at a slightly higher level of abstraction than this raw computational graph. So that's where these various packages are trying to help you out, by letting you work at this higher layer of abstraction. So another very popular package that you may have seen before is Keras. Keras is a very beautiful, nice API that sits on top of TensorFlow and handles sort of building up the computational graph for you in the back end. By the way, Keras also supports Theano as a back end, so that's also kind of nice. And in this example you can see we build the model as a sequence of layers. We build some optimizer object and we call model.compile and this does a lot of magic in the back end to build the graph. And now we can call model.fit and that does the whole training procedure for us magically. So I don't know all the details of how this works, but I know Keras is very popular, so you might consider using it if you're working with TensorFlow. Question? [student's words obscured due to lack of microphone] Yeah, so the question is why there's no explicit CPU, GPU going on here. So I've kind of left that out to keep the code clean. But you saw in the beginning examples it was pretty easy to flip all these things between CPU and GPU, and there was either some global flag or some different data type or some with statement, and it's usually relatively simple and just about one line to swap in each case. But exactly what that line looks like differs a bit depending on the situation. So there's actually like this whole large set of higher level TensorFlow wrappers that you might see out there in the wild. And it seems that even people within Google can't really agree on which one is the right one to use. So Keras and TFLearn are third party libraries that are out there on the internet by other people.
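(A rough sketch of that Keras workflow, assuming the standalone Keras 2 API with random data standing in for a real dataset:)

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

N, D, H = 64, 1000, 100
model = Sequential()
model.add(Dense(H, input_dim=D, activation='relu'))
model.add(Dense(D))
model.compile(optimizer=SGD(lr=1e-5), loss='mean_squared_error')  # builds the graph

x, y = np.random.randn(N, D), np.random.randn(N, D)
model.fit(x, y, epochs=50, batch_size=N, verbose=0)               # runs the whole training loop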
But there's these three different ones, tf.layers, TF-Slim and tf.contrib.learn that all ship with TensorFlow, that are all kind of doing a slightly different version of this higher level wrapper thing. There's another framework also from Google, but not shipping with TensorFlow called Pretty Tensor that does the same sort of thing. And I guess none of these were good enough for DeepMind, because they went ahead a couple weeks ago and wrote and released their very own high level TensorFlow wrapper called Sonnet. So I wouldn't begrudge you if you were kind of confused by all these things. There's a lot of different choices. They don't always play nicely with each other. But you have a lot of options, so that's good. TensorFlow has pretrained models. There's some examples in TF-Slim, and in Keras. 'Cause remember retrained models are super important when you're training your own things. There's also this idea of Tensorboard where you can load up your, I don't want to get into details, but Tensorboard you can add sort of instrumentation to your code and then plot losses and things as you go through the training process. TensorFlow also let's you run distributed where you can break up a computational graph run on different machines. That's super cool but I think probably not anyone outside of Google is really using that to great success these days, but if you do want to run distributed stuff probably TensorFlow is the main game in town for that. A side note is that a lot of the design of TensorFlow is kind of spiritually inspired by this earlier framework called Theano from Montreal. I don't want to go through the details here, just if you go through these slides on your own, you can see that the code for Theano ends up looking very similar to TensorFlow. Where we define some variables, we do some forward pass, we compute some gradients, and we compile some function, then we run the function over and over to train the network. So it kind of looks a lot like TensorFlow. So we still have a lot to get through, so I'm going to move on to PyTorch and maybe take questions at the end. So, PyTorch from Facebook is kind of different from TensorFlow in that we have sort of three explicit different layers of abstraction inside PyTorch. So PyTorch has this tensor object which is just like a Numpy array. It's just an imperative array, it doesn't know anything about deep learning, but it can run with GPU. We have this variable object which is a node in a computational graph which builds up computational graphs, lets you compute gradients, that sort of thing. And we have a module object which is a neural network layer that you can compose together these modules to build big networks. So if you kind of want to think about rough equivalents between PyTorch and TensorFlow you can think of the PyTorch tensor as fulfilling the same role as the Numpy array in TensorFlow. The PyTorch variable is similar to the TensorFlow tensor or variable or placeholder, which are all sort of nodes in a computational graph. And now the PyTorch module is kind of equivalent to these higher level things from tf.slim or tf.layers or sonnet or these other higher level frameworks. So right away one thing to notice about PyTorch is that because it ships with this high level abstraction and like one really nice higher level abstraction called modules on its own, there's sort of less choice involved. Just stick with nnmodules and you'll be good to go. You don't need to worry about which higher level wrapper to use. 
So PyTorch tensors, as I said, are just like Numpy arrays so here on the right we've done an entire two layer network using entirely PyTorch tensors. One thing to note is that we're not importing Numpy here at all anymore. We're just doing all these operations using PyTorch tensors. And this code looks exactly like the two layer net code that you wrote in Numpy on the first homework. So you set up some random data, you use some operations to compute the forward pass. And then we're explicitly writing the backward pass ourselves, just sort of backpropping through the network, through the operations, just as you did on homework one. And now we're doing a manual update of the weights using a learning rate and using our computed gradients. But the major difference between PyTorch tensors and Numpy arrays is that they run on GPU, so all you have to do to make this code run on GPU is use a different data type. Rather than using torch.FloatTensor, you do torch.cuda.FloatTensor, cast all of your tensors to this new datatype and everything runs magically on the GPU. You should think of PyTorch tensors as just Numpy plus GPU. That's exactly what it is, nothing specific to deep learning. So the next layer of abstraction in PyTorch is the variable. So once we move from tensors to variables, now we're building computational graphs and we're able to take gradients automatically and everything like that. So here, if x is a variable, then x.data is a tensor and x.grad is another variable containing the gradients of the loss with respect to that tensor. So x.grad.data is an actual tensor containing those gradients. And PyTorch tensors and variables have the exact same API. So any code that worked on PyTorch tensors, you can just make them variables instead and run the same code, except now you're building up a computational graph rather than just doing these imperative operations. So here when we create these variables, each call to the variable constructor wraps a PyTorch tensor and then also gives a flag for whether or not we want to compute gradients with respect to this variable. And now the forward pass with variables looks exactly like it did before in the case with tensors, because they have the same API. So now we're computing our predictions, we're computing our loss in kind of this imperative way. And then we call loss.backward and now all these gradients come out for us. And then we can make a gradient update step on our weights using the gradients that are now present in w1.grad.data. So this ends up looking quite like the Numpy case, except all the gradients come for free. One thing to note that's kind of different between PyTorch and TensorFlow is that in the TensorFlow case we were building up this explicit graph, then running the graph many times. Here in PyTorch, instead we're building up a new graph every time we do a forward pass. And this makes the code look a bit cleaner. And it has some other implications that we'll get to in a bit. So in PyTorch you can define your own new autograd functions by defining the forward and backward in terms of tensors. This ends up looking kind of like the modular layers code that you write for homework two. Where you can implement forward and backward using tensor operations and then stick these things inside the computational graph. So here we're defining our own relu and then we can actually go in and use our own relu operation and stick it inside our computational graph and define our own operations this way.
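(Here is a rough sketch of that variable-based training loop, written against the pre-0.4 PyTorch API the lecture is describing; in later PyTorch versions the Variable wrapper is no longer needed:)

import torch
from torch.autograd import Variable

dtype = torch.FloatTensor            # swap for torch.cuda.FloatTensor to run on GPU
N, D, H = 64, 1000, 100
x = Variable(torch.randn(N, D).type(dtype), requires_grad=False)
y = Variable(torch.randn(N, D).type(dtype), requires_grad=False)
w1 = Variable(torch.randn(D, H).type(dtype), requires_grad=True)
w2 = Variable(torch.randn(H, D).type(dtype), requires_grad=True)

learning_rate = 1e-6
for t in range(500):
    y_pred = x.mm(w1).clamp(min=0).mm(w2)    # forward pass builds a fresh graph each time
    loss = (y_pred - y).pow(2).sum()
    loss.backward()                           # gradients show up in w1.grad and w2.grad
    w1.data -= learning_rate * w1.grad.data   # manual SGD step on the underlying tensors
    w2.data -= learning_rate * w2.grad.data
    w1.grad.data.zero_()                      # clear gradients before the next iteration
    w2.grad.data.zero_()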
But most of the time you will probably not need to define your own autograd operations. Most of the time, the operations you need will already be implemented for you. So in TensorFlow we saw that we can move to something like Keras or TF.Learn and this gives us a higher level API to work with, rather than these raw computational graphs. The equivalent in PyTorch is the nn package, where it provides these high level wrappers for working with these things. But unlike TensorFlow there's only one of them. And it works pretty well, so just use that if you're using PyTorch. So here, this ends up kind of looking like Keras, where we define our model as some sequence of layers, our linear and relu operations. And we use some loss function defined in the nn package, that's our mean squared error loss. And now inside each iteration of our loop we can run data forward through the model to get our predictions. We can run the predictions forward through the loss function to get our scalar loss, then we can call loss.backward, get all our gradients for free, and then loop over the parameters of the model and do our explicit gradient descent step to update the model. And again we see that we're sort of building up this new computational graph every time we do a forward pass. And just like we saw in TensorFlow, PyTorch provides these optimizer operations that kind of abstract away this updating logic and implement fancier update rules like Adam and whatnot. So here we're constructing an optimizer object, telling it that we want it to optimize over the parameters of the model, giving it some learning rate and the other hyperparameters. And now after we compute our gradients, we can just call optimizer.step and it updates all the parameters of the model for us right here. So another common thing you'll do in PyTorch a lot is define your own nn modules. So typically you'll write your own class which defines your entire model as a single new nn module class. And a module is just kind of a neural network layer that can contain either other modules, or trainable weights, or other kinds of state. So in this case we can redo the two layer net example by defining our own nn module class. So now here in the initializer of the class we're assigning this linear1 and linear2. We're constructing these new module objects and then storing them inside of our own class. And now in the forward pass we can use both our own internal modules as well as arbitrary autograd operations on variables to compute the output of our network. So here, inside this forward method, we receive the input x as a variable, then we pass the variable to our self.linear1 for the first layer. We use an autograd op, clamp, to compute the relu, we pass the output of that to the second linear and then that gives us our output. And now the rest of this code for training this thing looks pretty much the same, where we build an optimizer and loop over and on every iteration feed data to the model, compute the gradients with loss.backward, call optimizer.step. So this is relatively characteristic of what you might see in a lot of PyTorch type training scenarios, where you define your own class, defining your own model that contains other modules and whatnot, and then you have some explicit training loop like this that runs it and updates it. One kind of nice quality of life thing that you have in PyTorch is a dataloader. So a dataloader can handle building minibatches for you.
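(Before getting to dataloaders, here is a rough sketch of that nn module pattern, again in the pre-0.4 style used in the lecture, with made-up sizes:)

import torch
from torch.autograd import Variable

class TwoLayerNet(torch.nn.Module):
    def __init__(self, d_in, h, d_out):
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(d_in, h)   # submodules hold their own weights
        self.linear2 = torch.nn.Linear(h, d_out)

    def forward(self, x):
        h = self.linear1(x).clamp(min=0)          # relu via the clamp autograd op
        return self.linear2(h)

model = TwoLayerNet(1000, 100, 10)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

x = Variable(torch.randn(64, 1000))
y = Variable(torch.randn(64, 10))
for t in range(500):
    loss = criterion(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()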
It can handle some of the multi-threading that we talked about for you, where it can actually use multiple threads in the background to build many batches for you and stream off disk. So here a dataloader wraps a dataset and provides some of these abstractions for you. And in practice when you want to run your own data, you typically will write your own dataset class which knows how to read your particular type of data off whatever source you want and then wrap it in a data loader and train with that. So, here we can see that now we're iterating over the dataloader object and at every iteration this is yielding minibatches of data. And it's internally handling the shuffling of the data and multithreaded dataloading and all this sort of stuff for you. So this is kind of a completely PyTorch example and a lot of PyTorch training code ends up looking something like this. PyTorch provides pretrained models. And this is probably the slickest pretrained model experience I've ever seen. You just say torchvision.models.alexnet pretained=true. That'll go down in the background, download the pretrained weights for you if you don't already have them, and then it's right there, you're good to go. So this is super easy to use. PyTorch also has, there's also a package called Visdom that lets you visualize some of these loss statistics somewhat similar to Tensorboard. So that's kind of nice, I haven't actually gotten a chance to play around with this myself so I can't really speak to how useful it is, but one of the major differences between Tensorboard and Visdom is that Tensorboard actually lets you visualize the structure of the computational graph. Which is really cool, a really useful debugging strategy. And Visdom does not have that functionality yet. But I've never really used this myself so I can't really speak to its utility. As a bit of an aside, PyTorch is kind of an evolution of, kind of a newer updated version of an older framework called Torch which I worked with a lot in the last couple of years. And I don't want to go through the details here, but PyTorch is pretty much better in a lot of ways than the old Lua Torch, but they actually share a lot of the same back end C code for computing with tensors and GPU operations on tensors and whatnot. So if you look through this Torch example, some of it ends up looking kind of similar to PyTorch, some of it's a bit different. Maybe you can step through this offline. But kind of the high level differences between Torch and PyTorch are that Torch is actually in Lua, not Python, unlike these other things. So learning Lua is a bit of a turn off for some people. Torch doesn't have autograd. Torch is also older, so it's more stable, less susceptible to bugs, there's maybe more example code for Torch. They're about the same speeds, that's not really a concern. But in PyTorch it's in Python which is great, you've got autograd which makes it a lot simpler to write complex models. In Lua Torch you end up writing a lot of your own back prop code sometimes, so that's a little bit annoying. But PyTorch is newer, there's less existing code, it's still subject to change. So it's a little bit more of an adventure. But at least for me, I kind of prefer, I don't really see much reason for myself to use Torch over PyTorch anymore at this time. So I'm pretty much using PyTorch exclusively for all my work these days. We talked about this a little bit about this idea of static versus dynamic graphs. 
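(Going back to the dataloader and pretrained-model conveniences mentioned above, a rough sketch might look like this; the random tensors stand in for a real dataset:)

import torch
from torch.utils.data import TensorDataset, DataLoader
import torchvision

# in practice you would write your own Dataset class for your data format
dataset = TensorDataset(torch.randn(512, 1000), torch.randn(512, 10))
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=2)

for epoch in range(5):
    for x_batch, y_batch in loader:   # shuffling and multithreaded loading handled for you
        pass                          # forward / backward / optimizer.step() would go here

alexnet = torchvision.models.alexnet(pretrained=True)   # downloads weights if needed

That covers the day-to-day PyTorch workflow; the static versus dynamic graph distinction just mentioned is where the two frameworks differ most.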
And this is one of the main distinguishing features between PyTorch and TensorFlow. So we saw in TensorFlow you have these two stages of operation where first you build up this computational graph, then you run the computational graph over and over again many many times reusing that same graph. That's called a static computational graph 'cause there's only one of them. And we saw PyTorch is quite different where we're actually building up this new computational graph, this new fresh thing on every forward pass. That's called a dynamic computational graph. For kind of simple cases, with kind of feed forward neural networks, it doesn't really make a huge difference, the code ends up kind of similarly and they work kind of similarly, but I do want to talk a bit about some of the implications of static versus dynamic. And what are the tradeoffs of those two. So one kind of nice idea with static graphs is that because we're kind of building up one computational graph once, and then reusing it many times, the framework might have the opportunity to go in and do optimizations on that graph. And kind of fuse some operations, reorder some operations, figure out the most efficient way to operate that graph so it can be really efficient. And because we're going to reuse that graph many times, maybe that optimization process is expensive up front, but we can amortize that cost with the speedups that we've gotten when we run the graph many many times. So as kind of a concrete example, maybe if you write some graph which has convolution and relu operations kind of one after another, you might imagine that some fancy graph optimizer could go in and actually output, like emit custom code which has fused operations, fusing the convolution and the relu so now it's computing the same thing as the code you wrote, but now might be able to be executed more efficiently. So I'm not too sure on exactly what the state in practice of TensorFlow graph optimization is right now, but at least in principle, this is one place where static graph really, you can have the potential for doing this optimization in static graphs where maybe it would be not so tractable for dynamic graphs. Another kind of subtle point about static versus dynamic is this idea of serialization. So with a static graph you can imagine that you write this code that builds up the graph and then once you've built the graph, you have this data structure in memory that represents the entire structure of your network. And now you could take that data structure and just serialize it to disk. And now you've got the whole structure of your network saved in some file. And then you could later rear load that thing and then run that computational graph without access to the original code that built it. So this would be kind of nice in a deployment scenario. You might imagine that you might want to train your network in Python because it's maybe easier to work with, but then after you serialize that network and then you could deploy it now in maybe a C++ environment where you don't need to use the original code that built the graph. So that's kind of a nice advantage of static graphs. Whereas with a dynamic graph, because we're interleaving these processes of graph building and graph execution, you kind of need the original code at all times if you want to reuse that model in the future. On the other hand, some advantages for dynamic graphs are that it kind of makes, it just makes your code a lot cleaner and a lot easier in a lot of scenarios. 
So for example, suppose that we want to do some conditional operation where depending on the value of some variable Z, we want to do different operations to compute Y. Where if Z is positive, we want to use one weight matrix, if Z is negative we want to use a different weight matrix. And we just want to switch off between these two alternatives. In PyTorch because we're using dynamic graphs, it's super simple. Your code kind of looks exactly like you would expect, exactly what you would do in Numpy. You can just use normal Python control flow to handle this thing. And now because we're building up the graph each time, each time we perform this operation will take one of the two paths and build up maybe a different graph on each forward pass, but for any graph that we do end up building up, we can back propagate through it just fine. And the code is very clean, easy to work with. Now in TensorFlow the situations is a little bit more complicated because we build the graph once, this control flow operator kind of needs to be an explicit operator in the TensorFlow graph. And now, so them you can see that we have this tf.cond call which is kind of like a TensorFlow version of an if statement, but now it's baked into the computational graph rather than using sort of Python control flow. And the problem is that because we only build the graph once, all the potential paths of control flow that our program might flow through need to be baked into the graph at the time we construct it before we ever run it. So that means that any kind of control flow operators that you want to have need to be not Python control flow operators, you need to use some kind of magic, special tensor flow operations to do control flow. In this case this tf.cond. Another kind of similar situation happens if you want to have loops. So suppose that we want to compute some kind of recurrent relationships where maybe Y T is equal to Y T minus one plus X T times some weight matrix W and depending on each time we do this, every time we compute this, we might have a different sized sequence of data. And no matter the length of our sequence of data, we just want to compute this same recurrence relation no matter the size of the input sequence. So in PyTorch this is super easy. We can just kind of use a normal for loop in Python to just loop over the number of times that we want to unroll and now depending on the size of the input data, our computational graph will end up as different sizes, but that's fine, we can just back propagate through each one, one at a time. Now in TensorFlow this becomes a little bit uglier. And again, because we need to construct the graph all at once up front, this control flow looping construct again needs to be an explicit node in the TensorFlow graph. So I hope you remember your functional programming because you'll have to use those kinds of operators to implement looping constructs in TensorFlow. So in this case, for this particular recurrence relationship you can use a foldl operation and pass in, sort of implement this particular loop in terms of a foldl. But what this basically means is that you have this sense that TensorFlow is almost building its own entire programming language, using the language of computational graphs. And any kind of control flow operator, or any kind of data structure needs to be rolled into the computational graph so you can't really utilize all your favorite paradigms for working imperatively in Python. 
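(A rough sketch of what that dynamic control flow looks like on the PyTorch side, with made-up shapes and a hypothetical loop length T; the TensorFlow 1.x equivalent would have to express the branch and the loop as graph ops like tf.cond and tf.foldl:)

import torch
from torch.autograd import Variable

def forward(x, w1, w2, z, T):
    # ordinary Python control flow; a fresh graph is built on every call
    w = w1 if z > 0 else w2            # data-dependent branch
    y = x.mm(w).clamp(min=0)
    for _ in range(T):                 # data-dependent loop length
        y = y + x.mm(w)
    return y

x = Variable(torch.randn(4, 8))
w1 = Variable(torch.randn(8, 8), requires_grad=True)
w2 = Variable(torch.randn(8, 8), requires_grad=True)
out = forward(x, w1, w2, z=1.0, T=3)
out.sum().backward()                    # backprop works through whichever graph was built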
You kind of need to relearn a whole separate set of control flow operators. And if you want to do any kinds of control flow inside your computational graph using TensorFlow. So at least for me, I find that kind of confusing, a little bit hard to wrap my head around sometimes, and I kind of like that using PyTorch dynamic graphs, you can just use your favorite imperative programming constructs and it all works just fine. By the way, there actually is some very new library called TensorFlow Fold which is another one of these layers on top of TensorFlow that lets you implement dynamic graphs, you kind of write your own code using TensorFlow Fold that looks kind of like a dynamic graph operation and then TensorFlow Fold does some magic for you and somehow implements that in terms of the static TensorFlow graphs. This is a super new paper that's being presented at ICLR this week in France. So I haven't had the chance to like dive in and play with this yet. But my initial impression was that it does add some amount of dynamic graphs to TensorFlow but it is still a bit more awkward to work with than the sort of native dynamic graphs you have in PyTorch. So then, I thought it might be nice to motivate like why would we care about dynamic graphs in general? So one option is recurrent networks. So you can see that for something like image captioning we use a recurrent network which operates over sequences of different lengths. In this case, the sentence that we want to generate as a caption is a sequence and that sequence can vary depending on our input data. So now you can see that we have this dynamism in the thing where depending on the size of the sentence, our computational graph might need to have more or fewer elements. So that's one kind of common application of dynamic graphs. For those of you who took CS224N last quarter, you saw this idea of recursive networks where sometimes in natural language processing you might, for example, compute a parsed tree of a sentence and then you want to have a neural network kind of operate recursively up this parse tree. So having a neural network that kind of works, it's not just a sequential sequence of layers, but instead it's kind of working over some graph or tree structure instead where now each data point might have a different graph or tree structure so the structure of the computational graph then kind of mirrors the structure of the input data. And it could vary from data point to data point. So this type of thing seems kind of complicated and hairy to implement using TensorFlow, but in PyTorch you can just kind of use like normal Python control flow and it'll work out just fine. Another bit of more researchy application is this really cool idea that I like called neuromodule networks for visual question answering. So here the idea is that we want to ask some questions about images where we maybe input this image of cats and dogs, there's some question, what color is the cat, and then internally the system can read the question and that has these different specialized neural network modules for performing operations like asking for colors and finding cats. And then depending on the text of the question, it can compile this custom architecture for answering the question. And now if we asked a different question, like are there more cats than dogs? Now we have maybe the same basic set of modules for doing things like finding cats and dogs and counting, but they're arranged in a different order. 
So we get this dynamism again where different data points might give rise to different computational graphs. But this is a bit more of a researchy thing and maybe not so main stream right now. But as kind of a bigger point, I think that there's a lot of cool, creative applications that people could do with dynamic computational graphs and maybe there aren't so many right now, just because it's been so painful to work with them. So I think that there's a lot of opportunity for doing cool, creative things with dynamic computational graphs. And maybe if you come up with cool ideas, we'll feature it in lecture next year. So I wanted to talk very briefly about Caffe which is this framework from Berkeley. Which Caffe is somewhat different from the other deep learning frameworks where you in many cases you can actually train networks without writing any code yourself. You kind of just call into these pre-existing binaries, set up some configuration files and in many cases you can train on data without writing any of your own code. So, you may be first, you convert your data into some format like HDF5 or LMDB and there exists some scripts inside Caffe that can just convert like folders of images and text files into these formats for you. You need to define, now instead of writing code to define the structure of your computational graph, instead you edit some text file called a prototxt which sets up the structure of the computational graph. Here the structure is that we read from some input HDF5 file, we perform some inner product, we compute some loss and the whole structure of the graph is set up in this text file. One kind of downside here is that these files can get really ugly for very large networks. So for something like the 152 layer ResNet model, which by the way was trained in Caffe originally, then this prototxt file ends up almost 7000 lines long. So people are not writing these by hand. People will sometimes will like write python scripts to generate these prototxt files. [laughter] Then you're kind in the realm of rolling your own computational graph abstraction. That's probably not a good idea, but I've seen that before. Then, rather than having some optimizer object, instead there's some solver, you define some solver things inside another prototxt. This defines your learning rate, your optimization algorithm and whatnot. And then once you do all these things, you can just run the Caffe binary with the train command and it all happens magically. Cafee has a model zoo with a bunch of pretrained models, that's pretty useful. Caffe has a Python interface but it's not super well documented. You kind of need to read the source code of the python interface to see what it can do, so that's kind of annoying. But it does work. So, kind of my general thing about Caffe is that it's maybe good for feed forward models, it's maybe good for production scenarios, because it doesn't depend on Python. But probably for research these days, I've seen Caffe being used maybe a little bit less. Although I think it is still pretty commonly used in industry again for production. I promise one slide, one or two slides on Caffe 2. So Caffe 2 is the successor to Caffe which is from Facebook. It's super new, it was only released a week ago. [laughter] So I really haven't had the time to form a super educated opinion about Caffe 2 yet, but it uses static graphs kind of similar to TensorFlow. Kind of like Caffe one the core is written in C++ and they have some Python interface. 
The difference is that now you no longer need to write your own Python scripts to generate prototxt files. You can kind of define your computational graph structure all in Python, kind of looking with an API that looks kind of like TensorFlow. But then you can spit out, you can serialize this computational graph structure to a prototxt file. And then once your model is trained and whatnot, then we get this benefit that we talked about of static graphs where you can, you don't need the original training code now in order to deploy a trained model. So one interesting thing is that you've seen Google maybe has one major deep running framework, which is TensorFlow, where Facebook has these two, PyTorch and Caffe 2. So these are kind of different philosophies. Google's kind of trying to build one framework to rule them all that maybe works for every possible scenario for deep learning. This is kind of nice because it consolidates all efforts onto one framework. It means you only need to learn one thing and it'll work across many different scenarios including like distributed systems, production, deployment, mobile, research, everything. Only need to learn one framework to do all these things. Whereas Facebook is taking a bit of a different approach. Where PyTorch is really more specialized, more geared towards research so in terms of writing research code and quickly iterating on your ideas, that's super easy in PyTorch, but for things like running in production, running on mobile devices, PyTorch doesn't have a lot of great support. Instead, Caffe 2 is kind of geared toward those more production oriented use cases. So my kind of general study, my general, overall advice about like which framework to use for which problems is kind of that both, I think TensorFlow is a pretty safe bet for just about any project that you want to start new, right? Because it is sort of one framework to rule them all, it can be used for just about any circumstance. However, you probably need to pair it with a higher level wrapper and if you want dynamic graphs, you're maybe out of luck. Some of the code ends up looking a little bit uglier in my opinion, but maybe that's kind of a cosmetic detail and it doesn't really matter that much. I personally think PyTorch is really great for research. If you're focused on just writing research code, I think PyTorch is a great choice. But it's a bit newer, has less community support, less code out there, so it could be a bit of an adventure. If you want more of a well trodden path, TensorFlow might be a better choice. If you're interested in production deployment, you should probably look at Caffe, Caffe 2 or TensorFlow. And if you're really focused on mobile deployment, I think TensorFlow and Caffe 2 both have some built in support for that. So it's kind of unfortunately, there's not just like one global best framework, it kind of depends on what you're actually trying to do, what applications you anticipate but theses are kind of my general advice on those things. So next time we'll talk about some case studies about various CNN architectures.
Lecture_Collection_Convolutional_Neural_Networks_for_Visual_Recognition_Spring_2017
Lecture_15_Efficient_Methods_and_Hardware_for_Deep_Learning.txt
- Hello everyone, welcome to CS231. I'm Song Han. Today I'm going to give a guest lecture on the efficient methods and hardware for deep learning. So I'm a fifth year PhD candidate here at Stanford, advised by Professor Bill Dally. So, in this course we have seen a lot of convolution neural networks, recurrent neural networks, or even since last time, the reinforcement learning. They are spanning a lot of applications. For example, the self-=driving car, machine translation, AlphaGo and Smart Robots. And it's changing our lives, but there is a recent trend that in order to achieve such high accuracy, the models are getting larger and larger. For example for ImageNet recognition, the winner from 2012 to 2015, the model size increased by 16X. And just in one year, for Baidu's deep speech just in one year, the training operations, the number of training operations increased by 10X. So such large model creates lots of problems, for example the model size becomes larger and larger so it's difficult for them to be deployed either on those for example, on the mobile phones. If the item is larger than 100 megabytes, you cannot download until you connect to Wi-Fi. So those product managers and for example Baidu, Facebook, they are very sensitive to the size of the binary size of their model. And also for example, the self-driving car, you can only do those on over-the-air update for the model if the model is too large, it's also difficult. And the second challenge for those large models is that the training speed is extremely slow. For example, the ResNet152, which is only a few, less than 1% actually, more accurate than ResNet101. Takes 1.5 weeks to train on four Maxwell M40 GPUs for example. Which greatly limits either we are doing homework or if the researcher's designing new models is getting pretty slow. And the third challenge for those bulky model is the energy efficiency. For example, the AlphaGo beating Lee Sedol last year, took 2000 CPUs and 300 GPUs, which cost $3,000 just to pay for the electric bill, which is insane. So either on those embedded devices, those models are draining your battery power for on data-center increases the total cost of ownership of maintaining a large data-center. For example, Google in their blog, they mentioned if all the users using the Google Voice Search for just three minutes, they have to double their data-center. So that's a large cost. So reducing such cost is very important. And let's see where is actually the energy consumed. The large model means lots of memory access. You have to access, load those models from the memory means more energy. If you look at how much energy is consumed by loading the memory versus how much is consumed by multiplications and add those arithmetic operations, the memory access is more than two or three orders of magnitude, more energy consuming than those arithmetic operations. So how to make deep learning more efficient. So we have to improve energy efficiency by this Algorithm and Hardware Co-Design. So this is the previous way, which is our hardware. For example, we have some benchmarks say Spec 2006 and then run those benchmarks and tune your CPU architectures for those benchmarks. Now what we should do is to open up the box to see what can we do from algorithm side first and see what is the optimum question mark processing unit. That breaks the boundary between the algorithm hardware to improve the overall efficiency. So today's talk, I'm going to have the following agenda. 
We are going to cover four aspects: algorithm and hardware, for inference and for training. So they form a small two by two matrix, which includes the algorithms for efficient inference, hardware for efficient inference, the algorithms for efficient training, and lastly, the hardware for efficient training. For example, I'm going to cover the TPU, I'm going to cover the Volta. But before I cover those things, let's have three slides for Hardware 101, a brief introduction of the families of hardware in such a tree. So in general, we have roughly two branches. One is general purpose hardware, which can run any application, versus specialized hardware, which is tuned for a specific kind of application, a domain of applications. So the general purpose hardware includes the CPU and the GPU, and their difference is that the CPU is latency oriented, single threaded. It's like a big elephant. While the GPU is throughput oriented. It has many small, weak threads, but there are thousands of such small weak cores. Like a group of small ants, where there are so many ants. And for specialized hardware, roughly there are FPGAs and ASICs. So FPGA stands for Field Programmable Gate Array. So it is hardware programmable, so its logic can be changed. So it's cheaper for you to try new ideas and do prototyping, but it's less efficient. It's in the middle between general purpose hardware and a pure ASIC. An ASIC stands for Application Specific Integrated Circuit. It has a fixed logic, designed just for a certain application, for example deep learning. And Google's TPU is a kind of ASIC, and the GPUs we train neural networks on are over here. And another slide for Hardware 101 is the number representations. So in this slide, I'm going to convey the idea that the numbers in a computer are not real numbers, they are actually discrete. Even for 32 bit floating point numbers, the resolution is not perfect. It's not continuous, it's discrete. So for example FP32 means using 32 bits to represent a floating point number. There are three components in the representation: the sign bit S, the exponent bits, and the mantissa M, and the number it represents is (-1)^S times 1.M times 2 to the exponent. Similarly there is FP16, using 16 bits to represent a floating point number. In particular, I'm going to introduce Int8, which the Google TPU uses, using an integer to represent a fixed point number. So we have a certain number of bits for the integer part, followed by a radix point, which we can put at different positions for different layers, and lastly the fractional bits. So why do we prefer those eight bit or 16 bit formats rather than the traditional 32 bit floating point? That's the cost. So I generated this figure from 45 nanometer technology, about the energy cost versus the area cost for different operations. In particular, let's see here, going from 32 bit to 16 bit, we have about four times reduction in energy and also about four times reduction in area. Area means money. Every square millimeter takes money when you tape out a chip. So it's very beneficial for hardware design to go from 32 bit to 16 bit. That's why you hear that NVIDIA, from the Pascal architecture, started to support FP16. That's the reason why it's so beneficial. For example, if previously the battery could last four hours, now it becomes 16 hours. That's what it means to reduce the energy cost by four times.
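(To make the fixed point idea concrete, here is a small NumPy sketch of quantizing floating point values into the Int8 format described above; the function name and the choice of five fractional bits are just for illustration:)

import numpy as np

def to_fixed_int8(x, frac_bits):
    # 8-bit fixed point: 1 sign bit, (7 - frac_bits) integer bits, frac_bits fractional bits
    scale = 2.0 ** frac_bits
    q = np.clip(np.round(x * scale), -128, 127).astype(np.int8)
    return q, q.astype(np.float32) / scale      # integer code and the value it represents

w = np.array([0.4123, -1.57, 3.2], dtype=np.float32)
codes, approx = to_fixed_int8(w, frac_bits=5)    # resolution is 1/32 = 0.03125
# approx is about [0.40625, -1.5625, 3.1875]: discrete steps, not real numbers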
But here still, there's a problem of large energy cost for reading the memory. And let's see how we can deal with these expensive memory references, how do we deal with this problem better? So let's switch gears and come to our topic directly. So let's first introduce algorithms for efficient inference. I'm going to cover six topics, this is a really long slide, so I'm going to go relatively fast. So the first idea I'm going to talk about is pruning, pruning the neural networks. For example, this is the original neural network. So what I'm trying to do is, can we remove some of the weights and still have the same accuracy? It's like pruning a tree, getting rid of those redundant connections. This was first proposed by Professor Yann LeCun back in 1989, and I revisited this problem, 26 years later, on those modern deep neural nets to see how it works. So not all parameters are useful, actually. For example, in this case, if you want to fit a single line, but you're using a quadratic term, apparently the 0.01 is a redundant parameter. So I'm going to train the connectivity first and then prune some of the connections. And then retrain the remaining weights, and we iterate through this process. And as a result, I can reduce the number of connections in AlexNet from 61 million parameters to only about six million parameters, which is roughly 10 times less computation. So this is the accuracy. The x-axis is how many parameters we prune away and the y-axis is the accuracy you have. So we want to have fewer parameters, but we also want to have the same accuracy as before. We don't want to sacrifice accuracy. For example at 80%, if we zero away 80% of the parameters, the accuracy drops by about 4%. That's intolerable. But the good thing is that if we retrain the remaining weights, the accuracy can fully recover here. And if we do this process iteratively, by pruning and retraining, pruning and retraining, we can fully recover the accuracy until we prune away 90% of the parameters. So if you go back home and try it on your iPad or notebook, just zero away 50% of the parameters, say on your homework, you will astonishingly find that the accuracy actually doesn't hurt. So we just mentioned convolutional neural nets, how about RNNs and LSTMs? So I tried it with NeuralTalk. Again, pruning away 90% of the weights doesn't hurt the BLEU score. And here are some visualizations. For example, for the original picture, NeuralTalk says a basketball player in a white uniform is playing with a ball. Versus pruning away 90%, it says a basketball player in a white uniform is playing with a basketball. And so on. But if you're too aggressive, say you prune away 95% of the weights, the network is going to get drunk. It says, a man in a red shirt and white and black shirt is running through a field. So there's really a limit, a threshold, you have to take care of during the pruning. So interestingly, after I did the work, I did some research and found that actually the same pruning procedure happens in the human brain as well. So when we were born, there are about 50 trillion synapses in the brain. At one year old, this number surged to 1,000 trillion. And as we become adolescents, it becomes smaller actually, 500 trillion in the end, according to the study in Nature. So this is very interesting. And also, the pruning changed the weight distribution, because we are removing those small connections, and after we retrain them, that's why it becomes soft in the end. Yeah, question.
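(A minimal sketch of that magnitude-based pruning, in NumPy; the helper name and the 90% sparsity level are just illustrative:)

import numpy as np

def prune_by_magnitude(w, sparsity):
    # zero out the smallest-magnitude weights; return pruned weights and a binary mask
    k = int(sparsity * w.size)
    threshold = np.sort(np.abs(w), axis=None)[min(k, w.size - 1)]
    mask = (np.abs(w) >= threshold).astype(w.dtype)
    return w * mask, mask

w = np.random.randn(100, 100).astype(np.float32)
w_pruned, mask = prune_by_magnitude(w, sparsity=0.9)
# during retraining, keep re-applying the mask after every SGD step so pruned
# weights stay at zero:  w = (w - lr * grad) * mask
# iterate prune -> retrain a few times to reach high sparsity without losing accuracy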
- [Student] Are you trying to mean that it terms of your mixed weights during the training will be just set at zero and just start from scratch? And these start from the things that are at zero. - Yeah. So the question is, how do we deal with those zero connections? So we force them to be zero in all the other iterations. Question? - [Student] How do you pick which rates to drop? - Yeah so very simple. Small weights, drop it, sort it. If it's small, just-- - [Student] Any threshold that I decide? - Exactly, yeah. So the next idea, weight sharing. So now we have, remember our end goal is to remove connections so that we can have less memory footprint so that we can have more energy efficient deployment. Now we have less number of parameters by pruning. We want to have less number of bits per parameter so they're multiplied together they get a small model. So the idea is like this. Not all numbers, not all the weights has to be the exact number. For example, 2.09, 2.12 or all these four weights, you just put them using 2.0 to represent them. That's enough. Otherwise too accurate number is just leads to overfitting. So the idea is I can cluster the weights if they are similar, just using a centroid to represent the number instead of using the full precision weight. So that every time I do the inference, I just do inference on this single number. For example, this is a four by four weight matrix in a certain layer. And what I'm going to do is do k-means clustering by having the similar weight sharing the same centroid. For example, 2.09, 2.12, I store index of three pointing to here. So that, the good thing is we need to only store the two bit index rather than the 32 bit, floating point number. That's 16 times saving. And how do we train such neural network? They are binded together, so after we get the gradient, we color them in the same pattern as the weight and then we do a group by operation by having all the in that weights with the same index grouped together. And then we do a reduction by summing them up. And then multiplied by the learning rate subtracted from the original centroid. That's one iteration of the SGD for such weight shared neural network. So remember previously, after pruning this is what the weight distribution like and after weight sharing, they become discrete. There are only 16 different values here, meaning we can use four bits to represent each number. And by training on such weight shared neural network, training on such extremely shared neural network, these weights can adjust. It is the subtle changes that compensated for the loss of accuracy. So let's see, this is the number of bits we give it, this is the accuracy for convolution layers. Not until four bits, does the accuracy begin to drop and for those fully connected layers, very astonishingly, it's not until two bits, only four number, does the accuracy begins to drop. And this result is per layer. So we have covered two methods, pruning and weight sharing. What if we combine these two methods together. Do they work well? So by combining those methods, this is the compression ratio with the smaller on the left. And this is the accuracy. We can combine it together and make the model about 3% of its original size without hurting the accuracy at all. Compared with the each working individual data by 10%, accuracy begins to drop. And compared with the cheap SVD method, this has a better compression ratio. 
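(A rough sketch of that k-means weight sharing for one layer, using scikit-learn's k-means as one possible implementation; the function name and sizes are illustrative:)

import numpy as np
from sklearn.cluster import KMeans   # any k-means implementation would do

def share_weights(w, n_bits=2):
    # cluster the weights of one layer into 2^n_bits shared centroids
    k = 2 ** n_bits
    km = KMeans(n_clusters=k, n_init=10).fit(w.reshape(-1, 1))
    codebook = km.cluster_centers_.ravel()        # k full-precision centroids
    indices = km.labels_.reshape(w.shape)         # n_bits per weight instead of 32
    return codebook, indices

w = np.random.randn(16, 16).astype(np.float32)
codebook, idx = share_weights(w, n_bits=4)
w_shared = codebook[idx]                          # reconstructed (approximate) weights
# fine-tuning step for the shared weights: group the gradients by index and
# update each centroid:  codebook[j] -= lr * grad[idx == j].sum()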
And the final idea is we can apply Huffman coding, to use more bits for those infrequently appearing weights and fewer bits for those more frequently appearing weights. So by combining these three methods, pruning, weight sharing, and Huffman coding, we can compress state-of-the-art neural networks by anywhere from 10x to 49x without hurting the prediction accuracy. Sometimes it's a little bit better, but maybe that is noise. So the next question is, these models are just pre-trained models by, say, Google or Microsoft. Can we make a compact model to begin with, even before such compression? So SqueezeNet, you may have already worked with this neural network model in a homework. The idea is we have a squeeze layer here that feeds the three by three convolution with a smaller number of channels. So that's where squeeze comes from. And here we have two branches, rather than four branches as in the Inception model. So as a result, the model is extremely compact. It doesn't have any fully connected layers, everything is fully convolutional, and the last layer is a global pooling. So what if we apply the deep compression algorithm on such an already compact model, will it get even smaller? So this is AlexNet after compression, and this is SqueezeNet. Even before compression, it's 50x smaller than AlexNet but has the same accuracy. After compression it's 510x smaller, but with the same accuracy, only less than half a megabyte. This means it's very easy to fit such a small model in the cache, which is literally tens of megabytes of SRAM. So what does it mean? It's possible to achieve speedup. So this is the speedup I measured, for the fully connected layers only for now, on the CPU, GPU, and mobile GPU, before pruning and after pruning the weights. And on average, I observed a 3x speedup on the CPU, about 3x speedup on the GPU, and roughly 5x speedup on the mobile GPU, which is a TK1. And so is the energy efficiency: an average improvement from 3x to 6x on the CPU, GPU, and mobile GPU. And these ideas are used in these companies. Having talked about weight pruning and weight sharing, which is a non-linear quantization method, we're now going to talk about quantization, which is what is used in the TPU design. The TPU design uses only eight bits for inference. And the way they can do that is because of quantization. So let's see how it works. So quantization has this complicated figure, but the intuition is very simple. You run the neural network and train it with the normal floating point numbers. And then quantize the weights and activations by gathering the statistics for each layer: for example, what is the maximum number, the minimum number, and how many bits are enough to represent this dynamic range. Then you use that number of bits for the integer part, and the rest of the eight bits (or seven, after the sign bit) for the fractional part of the 8 bit representation. And also we can fine tune in the floating point format, or we can use the feed forward pass with fixed point and do the back propagation and weight update with floating point numbers. There are lots of different ideas to get better accuracy. And this is the result, the number of bits versus the accuracy. For example, using fixed 8 bit, the accuracy for GoogleNet doesn't drop significantly. And for VGG-16, the accuracy also holds up pretty well. While going down to six bits, the accuracy begins to drop pretty dramatically. Next idea, low rank approximation.
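(Before moving on to low rank approximation, here is a rough sketch of the SqueezeNet fire module just described, written as a PyTorch module; the channel counts are illustrative, not the exact SqueezeNet configuration:)

import torch
import torch.nn as nn

class Fire(nn.Module):
    # squeeze with 1x1 convs to few channels, then expand with parallel 1x1 and 3x3 branches
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super(Fire, self).__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))           # fewer channels feed the 3x3 convolution
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

block = Fire(in_ch=96, squeeze_ch=16, expand_ch=64)
out = block(torch.autograd.Variable(torch.randn(1, 96, 55, 55)))   # output has 128 channels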
It turned out that for a convolution layer, you can break it into two convolution layers. One convolution here, followed by a one by one convolution. So that it's like you break a complicated problem into two separate small problems. This is for convolution layer. As we can see, achieving about 2x speedup, there's almost no loss of accuracy. And achieving a speedup of 5x, roughly a 6% loss of accuracy. And this also works for fully connected layers. The simplest idea is using the SVD to break it into one matrix into two matrices. And follow this idea, this paper proposes to use the Tensor Tree to break down one fully connected layer into a tree, lots of fully connected layers. That's why it's called a tree. So going even more crazy, can we use only two weights or three weights to represent a neural network? A ternary weight or a binary weight. We already seen this distribution before, after pruning. There's some positive weights and negative weights. Can we just use three numbers, just use one, minus one, zero to represent the neural network. This is our recent paper clear that we maintain a full precision weight during training time, but at inference time, we only keep the scaling factor and the ternary weight. So during inference, we only need three weights. That's very efficient and making the model very small. This is the proportion of the positive zero and negative weights, they can change during the training. So is their absolute value. And this is the visualization of kernels by this trained ternary quantization. We can see some of them are a corner detector like here. And also here. Some of them are maybe edge detector. For example, this filter some of them are corner detector like here this filter. Actually we don't need such fine grain resolution. Just three weights are enough. So this is the validation accuracy on ImageNet with AlexNet. So the threshline is the baseline accuracy with floating point 32. And the red line is our result. Pretty much the same accuracy converged compared with the full precision weights. Last idea, Winograd Transformation. So this about how do we implement deep neural nets, how do we implement the convolutions. So this is the conventional direct convolution implementation method. The slide credited to Julien, a friend from Nvidia. So originally, we just do the element wise do a dot product for those nine elements in the filter and nine elements in the image and then sum it up. For example, for every output we need nine times C number of multiplication and adds. Winograd Convolution is another method, equivalent method. It's not lost, it's an equivalent method proposed at first through this paper, Fast Algorithms for Convolution Neural Networks. That instead of directly doing the convolution, move it one by one, at first it transforms the input feature map to another feature map. Which contains only the weight, contains only 1, 0.5, 2 that can efficiently implement it with shift. And also transform the filter into a four by four tensor. So what we are going to do here is sum over c and do an element-wise element-wise product. So there are only 16 multiplications happening here. And then we do a inverse transform to get four outputs. So the transform and the inverse transform can be amortized and the multiplications, whether it can ignored. So in order to get four output, we need nine times channel times four, which is 36 times channel. 
Those are the multiplications originally needed for the direct convolution, but now we only need 16 times C for our output. So that is 2.25x fewer multiplications to perform the exact same convolution. And here is the speedup. So theoretically a 2.25x speedup, and in practice, from cuDNN 5 they incorporated the Winograd convolution algorithm. This is on the VGG net I believe, and the speedup is roughly 1.7x to 2x. Pretty significant. And after cuDNN 5, cuDNN uses the Winograd convolution algorithm. Okay, so far we have covered those efficient algorithms for efficient inference. We covered pruning, weight sharing, quantization, Winograd, and also binary and ternary weights. So now let's see what the optimal hardware is for efficient inference, and what the Google TPU is. So there is a wide range of domain specific architectures, or ASICs, for deep neural networks. They have a common goal, which is to minimize the memory access to save power. For example, the Eyeriss from MIT uses the RS (row stationary) dataflow to minimize the off-chip DRAM access. And DaDianNao from the Chinese Academy of Sciences buffered all the weights in on-chip DRAM instead of having to go to off-chip DRAM. The TPU from Google is using eight bit integers to represent the numbers. And at Stanford I proposed the EIE architecture that supports compressed and sparse deep neural network inference. So this is what the TPU looks like. It can actually be put into a disk drive slot, up to four cards per server. And this is the high-level architecture of the Google TPU. Don't be overwhelmed, the kernel part here is this giant matrix multiplication unit. It's a 256 by 256 matrix multiplication unit, so in one single cycle it can perform 64K multiply-and-accumulate operations. So running at 700 Megahertz, the throughput is 92 Teraops per second, because it's actually integer operations. That's about 25x a GPU and more than 100x a CPU. And notice, the TPU has a really large software-managed on-chip buffer. It is 24 megabytes. The L3 cache for a CPU is about 16 megabytes, and this is 24 megabytes, which is pretty large. And it's fed by two DDR3 DRAM channels. This is a little weak, because the bandwidth is only 30 gigabytes per second, compared with the most recent GPUs with HBM at 900 gigabytes per second. DDR4 was released in 2014, so that makes sense, because the design dates from around that time and used DDR3. But if you use DDR4 or even high-bandwidth memory, the performance can be boosted even further. So this is a comparison of Google's TPU with the CPU and GPU, a K80 GPU by the way. The die area is much smaller, like half the size of the CPU and GPU, and the power consumption is roughly 75 watts. And see this number, the peak teraops per second is much higher than the CPU and GPU, about 90 teraops per second, which is pretty high. So here is the workload. Thanks to David for sharing the slide. This is the workload at Google, where they did a benchmark on these TPUs. So it's a little interesting that convolutional neural nets only account for 5% of the data-center workload. Most of it is multilayer perceptrons, those fully connected layers, about 61%, maybe for ads, I'm not sure. And about 29% of the workload in the data-center is Long Short Term Memory, for example speech recognition or machine translation, I suspect. Remember just now we have seen there are 90 teraops per second. But what number of teraops per second can actually be achieved?
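(As a quick sanity check on the 92 teraops figure quoted above, counting a multiply-accumulate as two operations:)

macs_per_cycle = 256 * 256                 # 65,536 multiply-accumulates per cycle
ops_per_second = macs_per_cycle * 2 * 700e6
print(ops_per_second)                       # about 9.2e13, i.e. roughly 92 teraops per second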
This is the roofline model, a basic tool to measure the bottleneck of a computer system: whether you are bottlenecked by the arithmetic or by the memory bandwidth. It's like a bucket; the lowest part of the bucket determines how much water it can hold. The x-axis is the arithmetic intensity, the number of floating point operations per byte, which is the ratio between computation and memory traffic. The y-axis is the actual attainable performance, and here is the peak performance ceiling. If, after you fetch a single piece of data, you can do a lot of operations on top of it, then you are bottlenecked by the arithmetic. But if you fetch a lot of data from memory and do only a tiny bit of arithmetic on it, then you are bottlenecked by the memory bandwidth: how much you can fetch from memory determines how much real performance you get. So in this sloped region you are bottlenecked by memory bandwidth, and at an arithmetic intensity of one, the attainable performance in that region is just the actual memory bandwidth of your system. So let's see what life is like for the TPU. The TPU's peak performance is really high, about 90 Teraops per second, and the convolutional nets pretty much saturate that peak. But there are a lot of neural networks with a utilization of less than 10%, meaning the 90 Teraops per second peak actually comes out to about 3 to 12 Teraops per second in real cases. Why is that? The reason is that in order to give a real-time guarantee, so the user doesn't wait too long, you cannot batch a lot of users' images or speech data together. As a result, the fully connected layers have very little reuse, so they are bottlenecked by the memory bandwidth. For the convolutional neural nets, for example this blue one, CNN0, which achieves 86, the ratio between the ops and the number of memory bytes is the highest, more than 2,000, while for the multilayer perceptrons and the long short-term memories the ratio is pretty low. This figure compares the TPU with the CPU and the GPU. The peak memory bandwidth is what you read off at a ratio of one here, and the TPU's roofline is the highest. And here is where these neural networks lie on the curve: the asterisks are the TPU results, and they are still higher than the other dots. But if you're not comfortable with the log-scale figure, this is what it looks like on a linear roofline: pretty much everything disappears except the TPU results. Still, all of these points, although higher than the CPU and GPU, are way below the theoretical peak operations per second. As I mentioned, it is really bottlenecked by the low-latency requirement, so it cannot have a large batch size; that's why the operations per byte are low. And how do you solve this problem? You want a smaller memory footprint so that you can reduce the memory bandwidth requirement. One solution is to compress the model, and the challenge is: how do we build hardware that can do inference directly on the compressed model? So I'm going to introduce my design, EIE, the Efficient Inference Engine, which deals with sparse, compressed models to save memory bandwidth. And the rule of thumb, as we mentioned before, is to take advantage of sparsity first: anything times zero is zero, so don't store it and don't compute on it.
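As a toy illustration of that rule of thumb, here is a sketch of a matrix-vector product that stores only the non-zero weights, with relative column indices loosely in the spirit of EIE's compressed format, and skips zero activations entirely. This is illustrative Python/NumPy, not the actual EIE datapath.

    import numpy as np

    def compress_rows(W):
        # Store only the non-zero weights per row, with relative (delta) column indices.
        rows = []
        for r in range(W.shape[0]):
            vals, deltas, last = [], [], -1
            for c in np.nonzero(W[r])[0]:
                vals.append(W[r, c])
                deltas.append(c - last)   # relative index needs fewer bits than an absolute one
                last = c
            rows.append((np.array(vals), np.array(deltas)))
        return rows

    def sparse_matvec(rows, a):
        # Multiply only stored (non-zero) weights, and skip zero activations.
        y = np.zeros(len(rows))
        for r, (vals, deltas) in enumerate(rows):
            col = -1
            for v, d in zip(vals, deltas):
                col += d
                if a[col] != 0.0:         # anything times zero is zero: don't compute it
                    y[r] += v * a[col]
        return y

    W = np.array([[0., 2., 0., 1.],
                  [0., 0., 0., 0.],
                  [3., 0., 0., 4.]])
    a = np.array([1., 0., 5., 2.])        # the zero activation is never touched
    print(sparse_matvec(compress_rows(W), a))   # matches W @ a
    print(W @ a)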
The second idea is that you don't need that much precision; you can approximate. By taking advantage of the sparse weights, we get about a 10x saving in computation and a 5x smaller memory footprint; the 2x difference is due to the index overhead. By taking advantage of the sparse activations, meaning after ReLU, if an activation is zero, ignore it, you save another 3x of computation. And then with the weight-sharing mechanism you can use four bits to represent each weight rather than 32 bits, which is another eight times saving in the memory footprint. This is how the weights are stored logically, a four by eight matrix, and this is how they are physically stored: only the non-zero weights. You don't store the zeros, so you save the bandwidth of fetching them, and I also use relative indices to further reduce the index overhead. In the computation, as this figure shows, we run the multiplications only on non-zeros. The activation is broadcast to the non-zero weights; if it is zero we skip it, and if it is non-zero we do the multiplication, then in the next cycle the next one. The idea, again, is that anything multiplied by zero is zero. This part is a little complicated, so I'll go quickly: there is a lookup table that decodes the four-bit weight into a 16-bit weight, and the four-bit relative index is passed through an address accumulator to get the 16-bit absolute index. And this is what the hardware architecture looks like at a high level; feel free to refer to my paper for the details. Okay, speedup. Using this hardware architecture together with model compression: this is the original result we saw for the CPU, GPU, and mobile GPU, and now EIE is here, 189 times faster than the CPU and about 13 times faster than the GPU. And this is the energy efficiency, on a log scale: about 24,000x more energy efficient than a CPU and about 3,000x more energy efficient than a GPU. It means, for example, that if your battery previously lasted one hour, now it could last 3,000 hours. And if you say an ASIC is always better than CPUs and GPUs because it's customized hardware, this compares EIE with peer ASICs, for example DaDianNao and TrueNorth. It has better throughput and better energy efficiency by orders of magnitude compared with the other ASICs, not to mention the CPUs, GPUs, and FPGAs. So we have covered half of the journey; we've pretty much covered everything for inference. Now we're going to switch gears and talk about training: how do we train neural networks efficiently, how do we train them faster? Again, we start with the algorithms first, the efficient algorithms, followed by the hardware for efficient training. For efficient training algorithms I'm going to mention four topics. The first one is parallelization; then mixed precision training, which was just released about one month ago at the NVIDIA GTC, so it's fresh knowledge; then model distillation; followed by my work on Dense-Sparse-Dense training, a better regularization technique. So let's start with parallelization. Anyone in the hardware community will be very familiar with this figure. As time goes by, what is the trend? The number of transistors keeps increasing, but the single-threaded performance has plateaued in recent years.
And the frequency has also plateaued in recent years; because of the power constraint, frequency scaling stopped. The interesting thing is that the number of cores keeps increasing. So what we really need is parallelization: how do we parallelize the problem to take advantage of parallel processing? There are actually a lot of opportunities for parallelism in deep neural networks. For example, we can do data parallelism: feeding two images into the same model and running them at the same time. This doesn't reduce the latency for a single input, but it makes the effective batch size larger; basically, if you have four machines, the effective batch size becomes four times what it was. It does require a coordinated weight update. For example, in this paper from Google there is a parameter server acting as the master and a number of workers, each running on its own piece of the training data, sending gradients up to the parameter server and getting the updated weights back; that's how data parallelism is handled. Another idea is model parallelism: you can split your model and hand the pieces to different processors or different threads. For example, to run a convolution over an image, which is a six-dimensional for loop, you can cut the input image into two-by-two blocks so that each thread or processor handles one fourth of the image, although there is a small halo in between that you have to take care of. You can also parallelize over the output or input feature maps. And for the fully connected layers it's even simpler: you can cut the weight matrix in half and hand each half to a different thread. The third idea is that you can even do hyper-parameter parallelism: for example, tuning the learning rate and the weight decay on different machines, a very coarse-grained parallelism, since there are so many alternatives to tune. A small summary of parallelism: there is lots of parallelism in deep neural networks. With data parallelism you can run multiple training images at once, but you cannot use an unlimited number of processors, because you are limited by the batch size; if it gets too large, stochastic gradient descent becomes plain gradient descent, and that's not good. You can also use model parallelism: split the model, either by cutting the image, by cutting the convolution weights, or by cutting the fully connected layers. So it's fairly easy to get 16 to 64 GPUs training one model in parallel with very good, almost linear speedup. Okay, the next interesting thing: mixed precision with FP16 and FP32. Remember, at the beginning of this lecture I had a chart showing the energy and area overhead of 16-bit versus 32-bit arithmetic: going from 32 bits to 16 bits saves about 4x the energy and 4x the area. So can we train a deep neural network at such low precision, with 16-bit floating point rather than 32-bit? It turns out we can do it partially. By partially, I mean we still need FP32 in some places. And where are those places? We can do the multiplications with 16-bit inputs, but we have to do the summation with 32-bit accumulation, and the result is kept in 32 bits for storing the weights. That's where the mixed precision comes from.
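A tiny sketch of why the accumulation is the sensitive part: multiplying in FP16 is fine, but summing many small FP16 products loses precision, while accumulating the same FP16 products into an FP32 register does not. This is an illustration in NumPy, not NVIDIA's actual implementation.

    import numpy as np

    a = np.random.randn(50000).astype(np.float16)
    b = np.random.randn(50000).astype(np.float16)

    # FP16 multiplies in both cases; only the accumulator precision differs.
    acc16 = np.float16(0.0)
    acc32 = np.float32(0.0)
    for x, y in zip(a, b):
        p = np.float16(x * y)                       # 16-bit multiply
        acc16 = np.float16(acc16 + p)               # 16-bit accumulate: rounding error piles up
        acc32 = np.float32(acc32) + np.float32(p)   # 32-bit accumulate

    ref = np.dot(a.astype(np.float64), b.astype(np.float64))
    print(abs(acc16 - ref), abs(acc32 - ref))       # FP32 accumulation stays much closer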
So for example, we have the master weights stored in floating point 32, and we down-convert them to floating point 16. Then we do the feed-forward pass with 16-bit weights and 16-bit activations, and we get a 16-bit activation at the end. When we do back-propagation, the computation is also done in floating point 16, and interestingly, we get a floating point 16 gradient for the weights. But when we do the update, W plus learning rate times the gradient, that operation has to be done in 32 bit. That's where the mixed precision comes from, and you can see the two colors in the figure: this part is 16 bit, this part is 32 bit. So does such low precision sacrifice the prediction accuracy of your model? This figure is from NVIDIA, released just a couple of weeks ago; thanks to Paulius for the slide. The convergence of floating point 32 versus the mixed-precision training with the tensor ops is actually pretty much the same, and if you zoom in a little, they are pretty much the same. For ResNet, the mixed precision sometimes behaves a little better than the full-precision weights, maybe because of the noise. In the end, after you train the model, this is the result for AlexNet, Inception V3, and ResNet-50 with FP32 versus the FP16 mixed-precision training: the accuracy is pretty much the same for the two methods, a little bit worse, but not by much. Having talked about mixed-precision training, the next idea is to train with model distillation. For example, you can have multiple neural networks, GoogleNet, VGGNet, ResNet, and the question is, can we take advantage of these different models? Of course we can do a model ensemble, but can we use them as teachers to teach a small junior neural network, and have it perform as well as the senior neural networks? That's the idea: you have multiple large, powerful teacher networks teaching this student model, and hopefully it gets better results. The way to do it is that instead of using the hard label, where for car, dog, cat the probability for dog is 100%, the output of the geometric ensemble of those large teacher networks might say the dog is 90% and the cat is about 10%. And the magic happens here: you soften that label, so that for example the dog is 30% and the cat is 20%. The dog is still higher than the cat, so the prediction is still correct, but you use this soft label to train the student network rather than the hard label. Mathematically, you control how much you soften it with a temperature in the softmax. And the result is that, starting with a trained model that classifies 58.9% of the test frames correctly, the new student model converges to 57% while training on only 3% of the data. That's the magic of model distillation using soft labels. And the last idea is my recent paper on using better regularization to train deep neural nets. We have seen these two figures before: we pruned the neural network, keeping fewer weights but the same accuracy. Now what I did is to recover and retrain those pruned weights, shown in red, and train everything together, to increase the model capacity after it has first been trained in a low-dimensional space. It's like learning the trunk first and then gradually adding the leaves and learning everything together.
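Here is a schematic sketch of that dense-sparse-dense schedule. The train() step is a hypothetical stand-in for a real training loop (the gradients are placeholders), and the magnitude-based pruning threshold is an assumption; this is the shape of the idea, not the paper's exact recipe.

    import numpy as np

    def train(W, mask, steps=100):
        # Hypothetical stand-in for SGD: only weights where mask is True get updated.
        for _ in range(steps):
            grad = np.random.randn(*W.shape)      # placeholder gradient
            W = W - 0.01 * grad * mask
        return W

    def dsd(W, sparsity=0.5):
        dense_mask = np.ones_like(W, dtype=bool)
        # 1. Dense: train all weights.
        W = train(W, dense_mask)
        # 2. Sparse: prune the smallest-magnitude weights (keep the "trunk") and retrain.
        thresh = np.quantile(np.abs(W), sparsity)
        sparse_mask = np.abs(W) >= thresh
        W = train(W * sparse_mask, sparse_mask)
        # 3. Dense again: recover the pruned weights (from zero) and retrain everything together.
        W = train(W, dense_mask)
        return W

    W = dsd(np.random.randn(64, 64))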
It turns out that on ImageNet this gives about a 1% to 4% absolute improvement in accuracy. It's also general purpose: it works on long short-term memory and other recurrent neural nets as well, in collaboration with Baidu. I also open-sourced these specially trained models in the DSD Model Zoo, where all of these models are available: GoogleNet, VGG, ResNet, SqueezeNet, and AlexNet. If you are interested, feel free to check out the Model Zoo and compare against the Caffe Model Zoo. Here are some examples of how dense-sparse-dense training helps with image captioning. This is a very challenging image: the original NeuralTalk baseline says "a boy in a red shirt is climbing a rock wall". The sparse model says "a young girl is jumping off a tree", probably mistaking the hair for the rock or the tree. But with dense-sparse-dense training, using this kind of regularization in a low-dimensional space first, it says "a young girl in a pink shirt is swinging on a swing". There are a lot more examples; due to the limited time I won't go over them one by one. For example, "a group of people are standing in front of a building" when there is no building, versus "a group of people are walking in the park". Feel free to check out the paper for more interesting results. Okay, finally we come to the hardware for efficient training. How do we take advantage of the algorithms we just mentioned, for example parallelism and mixed precision, and how is the hardware designed to actually exploit those features? First, GPUs. This is the NVIDIA Pascal GPU, GP100, which was released last year. It supports up to 20 Teraflops of FP16 and has 16 gigabytes of high-bandwidth memory at 750 gigabytes per second. Remember, computation and memory bandwidth are the two factors that determine your overall performance; whichever is lower, you suffer. This is a really high bandwidth, about 700 gigabytes per second, compared with DDR3 at just 10 or 30 gigabytes per second. It consumes 300 watts, it's built in a 16-nanometer process, and it has 160 gigabytes per second of NVLink. So remember: we have computation, we have memory, and the third thing is communication. All three have to be balanced to achieve good performance. This is very powerful, but even more exciting, just about a month ago Jensen announced the newest architecture, the Volta GPU. Let's see what is inside the Volta GPU. It has 15 Teraflops of FP32, and what's new here is 120 Tensor TOPS, specifically designed for deep learning; we'll cover in a moment what the Tensor Core is and where that 120 comes from. And rather than 750 gigabytes per second, this year with HBM2 they are at 900 gigabytes per second of memory bandwidth. Very exciting. It's a 12-nanometer process with a die size of more than 800 square millimeters, a really large chip, supported by 300 gigabytes per second of NVLink. So what's new in Volta? The most interesting thing for us in deep learning is this thing called the Tensor Core. What is a Tensor Core? It is effectively an instruction that does a four-by-four matrix times a four-by-four matrix fused multiply-add (FMA stands for fused multiply-add) in this mixed-precision operation, in just one single clock cycle. So let's unpack what that means. The mixed precision is exactly as we mentioned in the last section: we use FP16 for the multiplications, but the accumulation is done in FP32.
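To make the shape of that instruction concrete, here is a sketch of the D = A x B + C operation a Tensor Core performs, with FP16 inputs and an FP32 accumulator. This is illustrative NumPy, not the actual hardware or the CUDA WMMA API.

    import numpy as np

    A = np.random.randn(4, 4).astype(np.float16)   # FP16 input
    B = np.random.randn(4, 4).astype(np.float16)   # FP16 input
    C = np.random.randn(4, 4).astype(np.float32)   # FP32 accumulator

    # D = A @ B + C: 4 * 4 * 4 = 64 multiplications, accumulated in FP32.
    D = A.astype(np.float32) @ B.astype(np.float32) + C
    print(D.dtype, D.shape)   # float32, (4, 4)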
That's where the mixed precision comes from. So how many operations is that? If it's four by four by four, that's 64 multiplications in a single clock cycle, which is roughly a 12x increase in throughput for Volta compared with Pascal, which was released just last year. This is the result for matrix multiplication at different sizes: the speedup of Volta over Pascal is roughly 3x for these matrix multiplications. What we care about even more is not just matrix multiplication but actually running deep neural nets, both for training and for inference. For training ResNet-50, by taking advantage of the Tensor Cores in the V100, it is 2.4x faster than the P100 using FP32. On the right-hand side it compares the inference speedup: given a 7-millisecond latency requirement, what number of images per second can it process, as a measure of throughput? Again, the V100, by taking advantage of the Tensor Cores, is 3.7x faster than the P100. This figure gives a rough idea of what a Tensor Core is, what an integer unit is, and what a floating-point unit is. The whole figure is a single SM, a streaming multiprocessor. The SM is partitioned into four processing blocks, one, two, three, four, and in each block there are eight FP64 cores, 16 FP32 and 16 INT32 units, and then two of the new mixed-precision Tensor Cores specifically designed for deep learning, plus the warp scheduler, the dispatch unit, and the register file, as before. What's new here is the Tensor Core unit. Here is a figure comparing the recent generations of NVIDIA GPUs, from Kepler to Maxwell to Pascal to Volta. We can see that everything keeps improving: the boost clock has increased from about 800 MHz to 1.4 GHz, and starting with the Volta generation there are Tensor Core units, which never existed before. Up through Maxwell the GPUs used GDDR5, and from the Pascal GPU onward HBM, the high-bandwidth memory, came into place: 750 gigabytes per second there, 900 gigabytes per second here, compared with DDR3 at 30 gigabytes per second. The memory size actually didn't increase by much, and the power consumption also remains roughly the same, but given the increase in computation, fitting it into a fixed power envelope is still an exciting thing. The manufacturing process has improved from 28 nanometers to 16 nanometers, all the way to 12 nanometers, and the chip area has grown to about 800 square millimeters, which is really huge. So you may be interested in how the GPU compares with the TPU. In the original TPU paper, the TPU was designed roughly in 2015, and this is a comparison with the Pascal P40 GPU released in 2016. The TPU's power consumption is lower, and it has a larger on-chip memory of 24 megabytes, a really large on-chip SRAM managed by software. Both of them support INT8 operations, and for inferences per second under a 10-millisecond latency bound, if the TPU is 1x, the P40 is about 2x. Then, just last week at Google I/O, a new nuclear bomb landed on the Earth: the Google Cloud TPU. Now the TPU supports not only inference but also training. There is very limited information we can get beyond the Google blog, but their Cloud TPU delivers up to 180 teraflops to train and run machine learning models.
And this is multiple Cloud TPUs put together into a TPU pod, which is built with 64 of the second-generation TPUs and delivers up to 11.5 petaflops of machine learning acceleration. In the Google blog they mention that one of their large-scale translation models used to take a full day to train on 32 of the best commercially available GPUs, probably P40s or P100s, maybe, and now it trains to the same accuracy within one afternoon on just one eighth of a TPU pod, which is pretty exciting. Okay, so as a little wrap-up, we covered a lot of stuff. We talked about the four-dimensional space of algorithm and hardware, inference and training. We covered the algorithms for inference, for example pruning, quantization, Winograd convolution, binary and ternary weights, and weight sharing. Then the hardware for efficient inference: for example the TPU, which takes advantage of INT8, and also my EIE accelerator design, which takes advantage of sparsity; anything multiplied by zero is zero, so don't store it, don't compute on it. Then the efficient algorithms for training: how we do parallelization, and the most recent research on mixed-precision training, which uses FP16 rather than FP32 for training, a 4x saving in energy and a 4x saving in area, without really sacrificing the accuracy you get from training. Also Dense-Sparse-Dense training, which uses a better, sparse regularization, and the teacher-student model, where you have multiple teacher networks and a small student network, and you distill the knowledge from the teacher networks through a temperature. And finally we covered the hardware for efficient training and introduced two nuclear bombs: one is the Volta GPU, the other is the TPU version two, the Cloud TPU, along with the amazing Tensor Cores in the newest generation of NVIDIA GPUs. We also looked at the progression of the recent NVIDIA GPUs, from the Kepler K40, which is actually what we used when I started my research, through the M40 and Pascal, and finally the exciting Volta GPU. So every year there is a nuclear bomb in the spring. Okay, a little look ahead into the future. We can imagine a lot of AI applications in the future: smart society, smart care, IoT devices, smart retail, for example Amazon Go, and also the smart home; a lot of scenarios. That poses a lot of challenges for hardware design: it requires low latency, privacy, mobility, and energy efficiency, because you don't want your battery to drain very quickly. So it's both a challenging and a very exciting era for the co-design of both the machine learning and deep neural network model architectures and the hardware architectures. We have moved from the PC era to the mobile era, and now we are in the AI-first era, and I hope you are as excited as I am about this kind of brain-inspired cognitive computing research. Thank you for your attention, I'm glad to take questions. [applause] We have five minutes. Of course. - [Student] Can you commercialize this architecture? - The architecture, yeah, some of the ideas are pretty good. I think there's opportunity, yeah. Yeah. The question is, what can we do to make the hardware better? Oh, right, the question is about the challenges and the opportunities for small embedded devices running deep neural networks, or AI algorithms in general.
Yeah, so those are the algorithms I discussed at the beginning, about inference. These are the techniques that enable this kind of inference, or AI, to run on embedded devices: having fewer weights, fewer bits per weight, quantization, low-rank approximation, smaller matrices with the same accuracy, even going to binary or ternary weights, using just a couple of bits for the computation rather than 16 or even 32 bits, and also the Winograd transformation. Those are the enabling algorithms for low-power embedded devices. Okay, the question is: with binary weights, can software developers actually take advantage of it? There is a way to take advantage of binary weights. In one register there are 32 bits, so you can think of it as 32-way parallelism, where each bit is a single operation. Say previously you got 10 ops per second; now you get 320 ops per second, because you can do these bitwise operations, for example XNOR operations, so one register, one operation becomes 32 operations. There is a paper called XNOR-Net; they implemented this very impressively on a Raspberry Pi, using exactly this feature to do real-time detection, very cool stuff. Yeah. Yeah, so the trade-off is always power, area, and performance: in general, all hardware design has to take into account the performance, the power, and the area. When machine learning comes in, there is a fourth figure of merit, which is the accuracy. And there is a fifth one, which is programmability: how general is your hardware? For example, if Google just wants to use it for AI and deep learning, it's totally fine to have a very specialized architecture just for deep learning, supporting convolutions, multilayer perceptrons, and long short-term memory. But for GPUs, you also want to support scientific computing, graphics, AR and VR. So that's one difference, first of all. And the TPU is basically an ASIC, right? It's very fixed-function, but you can still program it with coarse-grained instructions; the people at Google designed coarse-granularity instructions, for example: load a matrix, store a matrix, do a convolution, do a matrix multiplication. And they have a software-managed memory, also called a scratchpad. It's different from a cache, where the hardware decides what to evict; here, since you know the computation pattern, there is no need for out-of-order execution or branch prediction, because everything is deterministic, so you can take advantage of that and maintain a fully software-managed scratchpad to reduce data movement. And remember, data movement is the key to reducing the memory footprint and the energy consumption. The Movidius and Nervana architectures I'm actually not that familiar with, and I didn't prepare slides on them, so I'll comment on those a little later, no. Oh, yeah, of course, those techniques can certainly be applied to low-power embedded devices. If you're interested, I can show you, whoops, where is that, some examples of my previous projects running deep neural nets. For example, on a drone, using an NVIDIA TK1 mobile GPU to do real-time tracking and detection. This is me playing my nunchaku, filmed by a drone doing the detection and tracking. And also this FPGA running a deep neural network; it's pretty small, about this large, doing face alignment and detecting the eyes, the nose, and the mouth at a pretty high frame rate.
It consumes only three watts. This is a project I did at Facebook, running deep neural nets on a mobile phone to do image classification; for example, it says this is a laptop, or you can feed it an image and it says it's a selfie, there is a person and a face, et cetera. So there are lots of opportunities for embedded or mobile deployment of deep neural nets. No, there is a team doing that, but I probably can't comment too much; there is a team at Google doing that sort of stuff, yeah. Okay, thanks, everyone. If you have any questions, feel free to drop me an e-mail.
Lecture Collection: Convolutional Neural Networks for Visual Recognition (Spring 2017)
Lecture 4: Introduction to Neural Networks
[students murmuring] - Okay, so good afternoon everyone, let's get started. So hi, so for those of you who I haven't met yet, my name is Serena Yeung and I'm the third and final instructor for this class, and I'm also a PhD student in Fei-Fei's group. Okay, so today we're going to talk about backpropagation and neural networks, and so now we're really starting to get to some of the core material in this class. Before we begin, let's see, oh. So a few administrative details, so assignment one is due Thursday, April 20th, so a reminder, we shifted the date back by a little bit and it's going to be due 11:59 p.m. on Canvas. So you should start thinking about your projects, there are TA specialties listed on the Piazza website so if you have questions about a specific project topic you're thinking about, you can go and try and find the TAs that might be most relevant. And then also for Google Cloud, so all students are going to get $100 in credits to use for Google Cloud for their assignments and project, so you should be receiving an email for that this week, I think. A lot of you may have already, and then for those of you who haven't, they're going to come, should be by the end of this week. Okay so where we are, so far we've talked about how to define a classifier using a function f, parameterized by weights W, and this function f is going to take data x as input, and output a vector of scores for each of the classes that you want to classify. And so from here we can also define a loss function, so for example, the SVM loss function that we've talked about which basically quantifies how happy or unhappy we are with the scores that we've produced, right, and then we can use that to define a total loss term. So L here, which is a combination of this data term, combined with a regularization term that expresses how simple our model is, and we have a preference for simpler models, for better generalization. And so now we want to find the parameters W that correspond to our lowest loss, right? We want to minimize the loss function, and so to do that we want to find the gradient of L with respect to W. So last lecture we talked about how we can do this using optimization, and we're going to iteratively take steps in the direction of steepest descent, which is the negative of the gradient, in order to walk down this loss landscape and get to the point of lowest loss, right? And we saw how this gradient descent can basically take this trajectory, looking like this image on the right, getting to the bottom of your loss landscape. Oh! Okay, and so we also talked about different ways for computing a gradient, right? We can compute this numerically using finite difference approximation which is slow and approximate, but at the same time it's really easy to write out, you know you can always get the gradient this way. We also talked about how to use the analytic gradient and computing this is, it's fast and exact once you've gotten the expression for the analytic gradient, but at the same time you have to do all the math and the calculus to derive this, so it's also, you know, easy to make mistakes, right? So in practice what we want to do is we want to derive the analytic gradient and use this, but at the same time check our implementation using the numerical gradient to make sure that we've gotten all of our math right. So today we're going to talk about how to compute the analytic gradient for arbitrarily complex functions, using a framework that I'm going to call computational graphs. 
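(An aside before the computational-graph material: a minimal sketch of the numerical gradient check described above, comparing a centered finite-difference estimate against an analytic gradient. The toy loss function and step size here are hypothetical illustrations, not code from the course.)

    import numpy as np

    def loss(W):
        # Hypothetical toy loss; stands in for the SVM loss plus regularization above.
        return np.sum(W ** 2) + np.sin(W).sum()

    def analytic_grad(W):
        # Hand-derived gradient of the toy loss.
        return 2 * W + np.cos(W)

    def numerical_grad(f, W, h=1e-5):
        # Slow but easy: centered finite differences, one coordinate at a time.
        grad = np.zeros_like(W)
        it = np.nditer(W, flags=['multi_index'])
        while not it.finished:
            i = it.multi_index
            orig = W[i]
            W[i] = orig + h; fp = f(W)
            W[i] = orig - h; fm = f(W)
            W[i] = orig
            grad[i] = (fp - fm) / (2 * h)
            it.iternext()
        return grad

    W = np.random.randn(3, 4)
    print(np.max(np.abs(analytic_grad(W) - numerical_grad(loss, W))))  # should be tiny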
And so basically what a computational graph is, is that we can use this kind of graph in order to represent any function, where the nodes of the graph are steps of computation that we go through. So for example, in this example, the linear classifier that we've talked about, the inputs here are x and W, right, and then this multiplication node represents the matrix multiplier, the multiplication of the parameters W with our data x that we have, outputting our vector of scores. And then we have another computational node which represents our hinge loss, right, computing our data loss term, Li. And we also have this regularization term at the bottom right, so this node which computes our regularization term, and then our total loss here at the end, L, is the sum of the regularization term and the data term. And the advantage is that once we can express a function using a computational graph, then we can use a technique that we call backpropagation which is going to recursively use the chain rule in order to compute the gradient with respect to every variable in the computational graph, and so we're going to see how this is done. And this becomes very useful when we start working with really complex functions, so for example, convolutional neural networks that we're going to talk about later in this class. We have here the input image at the top, we have our loss at the bottom, and the input has to go through many layers of transformations in order to get all the way down to the loss function. And this can get even crazier with things like, the, you know, like a neural turing machine, which is another kind of deep learning model, and in this case you can see that the computational graph for this is really insane, and especially, we end up, you know, unrolling this over time. It's basically completely impractical if you want to compute the gradients for any of these intermediate variables. Okay, so how does backpropagation work? So we're going to start off with a simple example, where again, our goal is that we have a function. So in this case, f of x, y, z equals x plus y times z, and we want to find the gradients of the output of the function with respect to any of the variables. So the first step, always, is we want to take our function f, and we want to represent it using a computational graph. Right, so here our computational graph is on the right, and you can see that we have our, first we have the plus node, so x plus y, and then we have this multiplication node, right, for the second computation that we're doing. And then, now we're going to do a forward pass of this network, so given the values of the variables that we have, so here, x equals negative two, y equals five and z equals negative four, I'm going to fill these all in in our computational graph, and then here we can compute an intermediate value, so x plus y gives three, and then finally we pass it through again, through the last node, the multiplication, to get our final node of f equals negative 12. So here we want to give every intermediate variable a name. So here I've called this intermediate variable after the plus node q, and we have q equals x plus y, and then f equals q times z, using this intermediate node. And I've also written out here, the gradients of q with respect to x and y, which are just one because of the addition, and then the gradients of f with respect to q and z, which is z and q respectively because of the multiplication rule. 
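(To anticipate the walkthrough that follows, here is that same example written out as a few lines of Python: the forward pass with x = -2, y = 5, z = -4, and then the backward pass applying the chain rule with the local gradients just listed. Illustrative code, not from the lecture slides.)

    # Forward pass
    x, y, z = -2.0, 5.0, -4.0
    q = x + y          # q = 3
    f = q * z          # f = -12

    # Backward pass (chain rule), starting from df/df = 1
    dfdf = 1.0
    dfdz = q * dfdf    # local grad of f w.r.t. z is q  ->  3
    dfdq = z * dfdf    # local grad of f w.r.t. q is z  -> -4
    dfdx = 1.0 * dfdq  # local grad of q w.r.t. x is 1  -> -4
    dfdy = 1.0 * dfdq  # local grad of q w.r.t. y is 1  -> -4
    print(dfdx, dfdy, dfdz)   # -4.0 -4.0 3.0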
And so what we want to find, is we want to find the gradients of f with respect to x, y and z. So what backprop is, it's a recursive application of the chain rule, so we're going to start at the back, the very end of the computational graph, and then we're going to work our way backwards and compute all the gradients along the way. So here if we start at the very end, right, we want to compute the gradient of the output with respect to the last variable, which is just f. And so this gradient is just one, it's trivial. So now, moving backwards, we want the gradient with respect to z, right, and we know that df over dz is equal to q. So the value of q is just three, and so we have here, df over dz equals three. And so next if we want to do df over dq, what is the value of that? What is df over dq? So we have here, df over dq is equal to z, right, and the value of z is negative four. So here we have df over dq is equal to negative four. Okay, so now continuing to move backwards to the graph, we want to find df over dy, right, but here in this case, the gradient with respect to y, y is not connected directly to f, right? It's connected through an intermediate node of z, and so the way we're going to do this is we can leverage the chain rule which says that df over dy can be written as df over dq, times dq over dy, and so the intuition of this is that in order to get to find the effect of y on f, this is actually equivalent to if we take the effect of q times q on f, which we already know, right? df over dq is equal to negative four, and we compound it with the effect of y on q, dq over dy. So what's dq over dy equal to in this case? - [Student] One. - One, right. Exactly. So dq over dy is equal to one, which means, you know, if we change y by a little bit, q is going to change by approximately the same amount right, this is the effect, and so what this is doing is this is saying, well if I change y by a little bit, the effect of y on q is going to be one, and then the effect of q on f is going to be approximately a factor of negative four, right? So then we multiply these together and we get that the effect of y on f is going to be negative four. Okay, so now if we want to do the same thing for the gradient with respect to x, right, we can do the, we can follow the same procedure, and so what is this going to be? [students speaking away from microphone] - I heard the same. Yeah exactly, so in this case we want to, again, apply the chain rule, right? We know the effect of q on f is negative four, and here again, since we have also the same addition node, dq over dx is equal to one, again, we have negative four times one, right, and the gradient with respect to x is going to be negative four. Okay, so what we're doing is, in backprop, is we basically have all of these nodes in our computational graph, but each node is only aware of its immediate surroundings, right? So we have, at each node, we have the local inputs that are connected to this node, the values that are flowing into the node, and then we also have the output that is directly outputted from this node. So here our local inputs are x and y, and the output is z. And at this node we also know the local gradient, right, we can compute the gradient of z with respect to x, and the gradient of z with respect to y, and these are usually really simple operations, right? 
Each node is going to be something like the addition or the multiplication that we had in that earlier example, which is something where we can just write down the gradient, and we don't have to, you know, go through very complex calculus in order to find this. - [Student] Can you go back and explain why more in the last slide was different than planning the first part of it using just normal calculus? - Yeah, so basically if we go back, hold on, let me... So if we go back here, we could exactly write out, find all of these using just calculus, so we could say, you know, we want df over dx, right, and we can probably expand out this expression and see that it's just going to be z, but we can do this for, in this case, because it's simple, but we'll see examples later on where once this becomes a really complicated expression, you don't want to have to use calculus to derive, right, the gradient for something, for a super-complicated expression, and instead, if you use this formalism and you break it down into these computational nodes, then you can only ever work with gradients of very simple computations, right, at the level of, you know, additions, multiplications, exponentials, things as simple as you want them, and then you just use the chain rule to multiply all these together, and get your, the value of your gradient without having to ever derive the entire expression. Does that make sense? [student murmuring] Okay, so we'll see an example of this later. And so, was there another question, yeah? [student speaking away from microphone] - [Student] What's the negative four next to the z representing? - Negative, okay yeah, so the negative four, these were the, the green values on top were all the values of the function as we passed it forward through the computational graph, right? So we said up here that x is equal to negative two, y is equal to five, and z equals negative four, so we filled in all of these values, and then we just wanted to compute the value of this function. Right, so we said this value of q is going to be x plus y, it's going to be negative two plus five, it is going to be three, and we have z is equal to negative four so we fill that in here, and then we multiplied q and z together, negative four times three in order to get the final value of f, right? And then the red values underneath were as we were filling in the gradients as we were working backwards. Okay. Okay, so right, so we said that, you know, we have these local, these nodes, and each node basically gets its local inputs coming in and the output that it sees directly passing on to the next node, and we also have these local gradients that we computed, right, the gradient of the immediate output of the node with respect to the inputs coming in. And so what happens during backprop is we have these, we'll start from the back of the graph, right, and then we work our way from the end all the way back to the beginning, and when we reach each node, at each node we have the upstream gradients coming back, right, with respect to the immediate output of the node. So by the time we reach this node in backprop, we've already computed the gradient of our final loss l, with respect to z, right? And so now what we want to find next is we want to find the gradients with respect to just before the node, to the values of x and y. 
And so as we saw earlier, we do this using the chain rule, right, we have from the chain rule, that the gradient of this loss function with respect to x is going to be the gradient with respect to z times, compounded by this gradient, local gradient of z with respect to x. Right, so in the chain rule we always take this upstream gradient coming down, and we multiply it by the local gradient in order to get the gradient with respect to the input. - [Student] So, sorry, is it, it's different because this would never work to get a general formula into the, or general symbolic formula for the gradient. It only works with instantaneous values, where you like. [student coughing] Or passing a little constant value as a symbolic. - So the question is whether this only works because we're working with the current values of the function, and so it works, right, given the current values of the function that we plug in, but we can write an expression for this, still in terms of the variables, right? So we'll see that gradient of L with respect to z is going to be some expression, and gradient of z with respect to x is going to be another expression, right? But we plug in these, we plug in the values of these numbers at the time in order to get the value of the gradient with respect to x. So what you could do is you could recursively plug in all of these expressions, right? Gradient with respect, z with respect to x is going to be a simple, simple expression, right? So in this case, if we have a multiplication node, gradient of z with respect to x is just going to be y, right, we know that, but the gradient of L with respect to z, this is probably a complex part of the graph in itself, right, so here's where we want to just, in this case, have this numerical, right? So as you said, basically this is going to be just a number coming down, right, a value, and then we just multiply it with the expression that we have for the local gradient. And I think this will be more clear when we go through a more complicated example in a few slides. Okay, so now the gradient of L with respect to y, we have exactly the same idea, where again, we use the chain rule, we have gradient of L with respect to z, times the gradient of z with respect to y, right, we use the chain rule, multiply these together and get our gradient. And then once we have these, we'll pass these on to the node directly before, or connected to this node. And so the main thing to take away from this is that at each node we just want to have our local gradient that we compute, just keep track of this, and then during backprop as we're receiving, you know, numerical values of gradients coming from upstream, we just take what that is, multiply it by the local gradient, and then this is what we then send back to the connected nodes, the next nodes going backwards, without having to care about anything else besides these immediate surroundings. So now we're going to go through another example, this time a little bit more complex, so we can see more why backprop is so useful. So in this case, our function is f of w and x, which is equal to one over one plus e to the negative of w-zero times x-zero plus w-one x-one, plus w-two, right? So again, the first step always is we want to write this out as a computational graph. So in this case we can see that in this graph, right, first we multiply together the w and x terms that we have, w-zero with x-zero, w-one with x-one, and w-two, then we add all of these together, right? 
Then we do, scale it by negative one, we take the exponential, we add one, and then finally we do one over this whole term. And then here I've also filled in values of these, so let's say given values that we have for the ws and xs, right, we can make a forward pass and basically compute what the value is at every stage of the computation. And here I've also written down here at the bottom the values, the expressions for some derivatives that are going to be helpful later on, so same as we did before with the simple example. Okay, so now then we're going to do backprop through here, right, so again, we're going to start at the very end of the graph, and so here again the gradient of the output with respect to the last variable is just one, it's just trivial, and so now moving backwards one step, right? So what's the gradient with respect to the input just before one over x? Well, so in this case, we know that the upstream gradient that we have coming down, right, is this red one, right? This is the upstream gradient that we have flowing down, and then now we need to find the local gradient, right, and the local gradient of this node, this node is one over x, right, so we have f of x equals one over x here in red, and the local gradient of this df over dx is equal to negative one over x-squared, right? So here we're going to take negative one over x-squared, and plug in the value of x that we had during this forward pass, 1.37, and so our final gradient with respect to this variable is going to be negative one over 1.37 squared times one equals negative 0.53. So moving back to the next node, we're going to go through the exact same process, right? So here, the gradient flowing from upstream is going to be negative 0.53, right, and here the local gradient, the node here is a plus one, and so now looking at our reference of derivatives at the bottom, we have that for a constant plus x, the local gradient is just one, right? So what's the gradient with respect to this variable using the chain rule? So it's going to be the upstream gradient of negative 0.53 times our local gradient of one, which is equal to negative 0.53. So let's keep moving backwards one more step. So here we have the exponential, right? So what's the upstream gradient coming down? [student speaking away from microphone] Right, so the upstream gradient is negative 0.53, what's the local gradient here? It's going to be the local gradient of e to the x, right? This is an exponential node, and so our chain rule is going to tell us that our gradient is going to be negative 0.53 times e to the power of x, which in this case is negative one, from our forward pass, and this is going to give us our final gradient of negative 0.2. Okay, so now one more node here, the next node is, that we reach, is going to be a multiplication with negative one, right? So here, what's the upstream gradient coming down? - [Student] Negative 0.2? - [Serena] Negative 0.2, right, and what's going to be the local gradient, can look at the reference sheet. It's going to be, what was it? I think I heard it. - [Student] That's minus one? - It's going to be minus one, exactly, yeah, because our local gradient says it's going to be, df over dx is a, right, and the value of a that we scaled x by is negative one here. So we have here that the gradient is negative one times negative 0.2, and so our gradient is 0.2. Okay, so now we've reached an addition node, and so in this case we have these two branches both connected to it, right? 
So what's the upstream gradient here? It's going to be 0.2, right, just as everything else, and here now the gradient with respect to each of these branches, it's an addition, right, and we saw from before in our simple example that when we have an addition node, the gradient with respect to each of the inputs to the addition is just going to be one, right? So here, our local gradient for looking at our top stream is going to be one times the upstream gradient of 0.2, which is going to give a total gradient of 0.2, right? And then we, for our bottom branch we'd do the same thing, right, our upstream gradient is 0.2, our local gradient is one again, and the total gradient is 0.2. So is everything clear about this? Okay. So we have a few more gradients to fill out, so moving back now we've reached w-zero and x-zero, and so here we have a multiplication node, right, so we saw the multiplication node from before, it just, the gradient with respect to one of the inputs just is the value of the other input. And so in this case, what's the gradient with respect to w-zero? - [Student] Minus 0.2. - Minus, I'm hearing minus 0.2, exactly. Yeah, so with respect to w-zero, we have our upstream gradient, 0.2, right, times our, this is the bottom one, times our value of x, which is negative one, we get negative 0.2 and we can do the same thing for our gradient with respect to x-zero. It's going to be 0.2 times the value of w-zero which is two, and we get 0.4. Okay, so here we've filled out most of these gradients, and so there was the question earlier about why this is simpler than just computing, deriving the analytic gradient, the expression with respect to any of these variables, right? And so you can see here, all we ever dealt with was expressions for local gradients that we had to write out, so once we had these expressions for local gradients, all we did was plug in the values for each of these that we have, and use the chain rule to numerically multiply this all the way backwards and get the gradients with respect to all of the variables. And so, you know, we can also fill out the gradients with respect to w-one and x-one here in exactly the same way, and so one thing that I want to note is that right when we're creating these computational graphs, we can define the computational nodes at any granularity that we want to. So in this case, we broke it down into the absolute simplest that we could, right, we broke it down into additions and multiplications, you know, it basically can't get any simpler than that, but in practice, right, we can group some of these nodes together into more complex nodes if we want. As long as we're able to write down the local gradient for that node, right? And so as an example, if we look at a sigmoid function, so I've defined the sigmoid function in the upper-right here, of a sigmoid of x is equal to one over one plus e to the negative x, and this is something that's a really common function that you'll see a lot in the rest of this class, and we can compute the gradient for this, we can write it out, and if we do actually go through the math of doing this analytically, we can get a nice expression at the end. So in this case it's equal to one minus sigma of x, so the output of this function times sigma of x, right? 
And so in cases where we have something like this, we could just take all the computations that we had in our graph that made up this sigmoid, and we could just replace it with one big node that's a sigmoid, right, because we do know the local gradient for this gate, it's this expression, d of the sigmoid of x over dx, right? So basically the important thing here is that you can, group any nodes that you want to make any sorts of a little bit more complex nodes, as long as you can write down the local gradient for this. And so all this is is basically a trade-off between, you know, how much math that you want to do in order to get a more, kind of concise and simpler graph, right, versus how simple you want each of your gradients to be, right? And then you can write out as complex of a computational graph that you want. Yeah, question? - [Student] This is a question on the graph itself, is there a reason that the first two multiplication nodes and the weights are not connected to a single addition node? - So they could also be connected into a single addition node, so the question was, is there a reason why w-zero and x-zero are not connected with w-two? All of these additions just connected together, and yeah, so the reason, the answer is that you can do that if you want, and in practice, maybe you would actually want to do that because this is still a very simple node, right? So in this case I just wrote this out into as simple as possible, where each node only had up to two inputs, but yeah, you could definitely do that. Any other questions about this? Okay, so the one thing that I really like about thinking about this like a computational graph is that I feel very comforted, right, like anytime I have to take a gradient, find gradients of something, even if the expression that I want to compute gradients of is really hairy, and really scary, you know, whether it's something like this sigmoid or something worse, I know that, you know, I could derive this if I want to, but really, if I just sit down and write it out in terms of a computational graph, I can go as simple as I need to to always be able to apply backprop and the chain rule, and be able to compute all the gradients that I need. And so this is something that you guys should think about when you're doing your homeworks, as basically, you know, anytime you're having trouble finding gradients of something just think about it as a computational graph, break it down into all of these parts, and then use the chain rule. Okay, and so, you know, so we talked about how we could group these set of nodes together into a sigmoid gate, and just to confirm, like, that this is actually exactly equivalent, we can plug this in, right? So we have that our input here to the sigmoid gate is going to be one, in green, and then we have that the output is going to be here, 0.73, right, and this'll work out if you plug it in to the sigmoid function. And so now if we want to do, if we want to take the gradient, and we want to treat this entire sigmoid as one node, now what we should do is we need to use this local gradient that we've derived up here, right? One minus sigmoid of x times the sigmoid of x. 
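(A small aside in code: the sigmoid treated as one gate, using the local gradient sigma(x) * (1 - sigma(x)) just derived, with a finite-difference check that the expression is right. Illustrative Python, not the course code.)

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sigmoid_backward(x, upstream):
        s = sigmoid(x)
        return upstream * s * (1 - s)   # chain rule: upstream times the local gradient

    x, upstream = 1.0, 1.0              # matches the example: sigmoid(1.0) is about 0.73
    print(sigmoid(x))                   # ~0.73
    print(sigmoid_backward(x, upstream))                  # ~0.20
    h = 1e-5
    print((sigmoid(x + h) - sigmoid(x - h)) / (2 * h))    # numerical check, also ~0.20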
So if we plug this in, and here we know that the value of sigmoid of x was 0.73, so if we plug this value in we'll see that this, the value of this gradient is equal to 0.2, right, and so the value of this local gradient is 0.2, we multiply it by the x upstream gradient which is one, and we're going to get out exactly the same value of the gradient with respect to before the sigmoid gate, as if we broke it down into all of the smaller computations. Okay, and so as we're looking at what's happening, right, as we're taking these gradients going backwards through our computational graph, there's some patterns that you'll notice where there's some intuitive interpretation that we can give these, right? So we saw that the add gate is a gradient distributor right, when we passed through this addition gate here, which had two branches coming out of it, it took the gradient, the upstream gradient and it just distributed it, passed the exact same thing to both of the branches that were connected. So here's a couple more that we can think about. So what's a max gate look like? So we have a max gate here at the bottom, right, where the input's coming in are z and w, z has a value of two, w has a value of negative one, and then we took the max of this, which is two, right, and so we pass this down into the remainder of our computational graph. So now if we're taking the gradients with respect to this, the upstream gradient is, let's say two coming back, right, and what does this local gradient look like? So anyone, yes? - [Student] It'll be zero for one, and one for the other? - Right. [student speaking away from microphone] Exactly, so the answer that was given is that z will have a gradient of two, w will have a value, a gradient of zero, and so one of these is going to get the full value of the gradient just passed back, and routed to that variable, and then the other one will have a gradient of zero, and so, so we can think of this as kind of a gradient router, right, so, whereas the addition node passed back the same gradient to both branches coming in, the max gate will just take the gradient and route it to one of the branches, and this makes sense because if we look at our forward pass, what's happening is that only the value that was the maximum got passed down to the rest of the computational graph, right? So it's the only value that actually affected our function computation at the end, and so it makes sense that when we're passing our gradients back, we just want to adjust what, you know, flow it through that branch of the computation. Okay, and so another one, what's a multiplication gate, which we saw earlier, is there any interpretation of this? [student speaking away from microphone] Okay, so the answer that was given is that the local gradient is basically just the value of the other variable. Yeah, so that's exactly right. So we can think of this as a gradient switcher, right? A switcher, and I guess a scaler, where we take the upstream gradient and we scale it by the value of the other branch. Okay, and so one other thing to note is that when we have a place where one node is connected to multiple nodes, the gradients add up at this node, right? 
So at these branches, using the multivariate chain rule, we're just going to take the value of the upstream gradient coming back from each of these nodes, and we'll add these together to get the total upstream gradient that's flowing back into this node, and you can see this from the multivariate chain rule, and you can also think about it this way: if you're going to change this node a little bit, it's going to affect both of these connected nodes in the forward pass, right, when you're making your forward pass through the graph. And so then when you're doing backprop, right, both of these gradients coming back are going to affect this node, right, and so that's how we're going to sum these up to be the total upstream gradient flowing back into this node. Okay, so any questions about backprop, going through these forward and backward passes? - [Student] So we haven't done anything to actually update the weights. [speaking away from microphone] - Right, so the question is, we haven't done anything yet to update the values of these weights, we've only found the gradients with respect to the variables, that's exactly right. So what we've talked about so far in this lecture is how to compute gradients with respect to any variables in our function, right, and then once we have these we can just apply everything we learned in the optimization lecture, last lecture, right? So given the gradient, we now take a step in the direction of the negative gradient in order to update our weights, our parameters, right? So you can just take this entire framework that we learned about last lecture for optimization, and what we've done here is just learn how to compute the gradients we need for arbitrarily complex functions, right, and so this is going to be useful when we talk about complex functions like neural networks later on. Yeah? - [Student] Do you mind writing out the multivariate chain rule, so you could help explain this slide a little better? - Yeah, so I can write this maybe on the board. Right, so basically if we want the gradient of f with respect to some variable x, and let's say x is connected to f through intermediate variables q-i, then the chain rule says that df/dx is the sum over i of df/dq-i times dq-i/dx. Right, so this is basically saying that if x is connected to these multiple elements, right, which in this case are the different q-i's, then the chain rule takes the effect of each of these intermediate variables, right, on our final output f, and then compounds each one with the local effect of our variable x on that intermediate value, right? So yeah, it's basically just summing all these up together. Okay, so now that we've, you know, done all these examples in the scalar case, we're going to look at what happens when we have vectors, right? So now our variables x, y and z, instead of just being numbers, are vectors. And so everything stays exactly the same, the entire flow, the only difference is that now our gradients are going to be Jacobian matrices, right, so these are now going to be matrices containing the derivative of each element of, for example, z with respect to each element of x.
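To make those gate-level rules concrete, here is a small sketch in plain Python of the backward behavior of the add, max, and multiply gates, plus the summing of gradients at a fan-out (all names and numbers are illustrative, not from the slides):

# Backward rules for the gates discussed above (scalar case).
def add_backward(dz):
    # gradient distributor: both inputs get the upstream gradient unchanged
    return dz, dz

def max_backward(x, y, dz):
    # gradient router: the upstream gradient goes to whichever input was larger
    return (dz, 0.0) if x >= y else (0.0, dz)

def mul_backward(x, y, dz):
    # gradient switcher/scaler: each input gets the upstream gradient times the other input
    return dz * y, dz * x

print(add_backward(2.0))             # (2.0, 2.0)
print(max_backward(2.0, -1.0, 2.0))  # (2.0, 0.0), as in the z/w example above
print(mul_backward(3.0, -4.0, 2.0))  # (-8.0, 6.0)

# When a node fans out to several downstream nodes, the upstream gradients sum:
# df/dx = df/dq1 * dq1/dx + df/dq2 * dq2/dx (the multivariate chain rule).
df_dq1, df_dq2 = 0.5, 2.0            # made-up upstream gradients
dq1_dx, dq2_dx = 3.0, -1.0           # made-up local derivatives
df_dx = df_dq1 * dq1_dx + df_dq2 * dq2_dx
print(df_dx)                         # -0.5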
Okay, and so to, you know, give an example of something where this is happening, right, let's say that our input is going to now be a vector, so let's say we have a 4096-dimensional input vector, and this is kind of a common size that you might see in convolutional neural networks later on, and our node is going to be an element-wise maximum, right? So we have f of x is equal to the maximum of x compared with zero element-wise, and then our output is going to be also a 4096-dimensional vector. Okay, so in this case, what's the size of our Jacobian matrix? Remember I said earlier, the Jacobian matrix is going to be a matrix of partial derivatives of each dimension of the output with respect to each dimension of the input. Okay, so the answer I heard was 4,096 squared, and that's, yeah, that's correct. So this is pretty large, right, 4,096 by 4,096, and in practice this is going to be even larger because we're going to work with batches of, for example, 100 inputs at the same time, right, and we'll put all of these through our node at the same time to be more efficient, and so this is going to scale this by 100, and in practice our Jacobian's actually going to turn out to be something like 409,000 by 409,000, right, so this is really huge, and basically completely impractical to work with. So in practice though, we don't actually need to compute this huge Jacobian most of the time, and so why is that, like, what does this Jacobian matrix look like? If we think about what's happening here, where we're taking this element-wise maximum, and we think about what each of the partial derivatives is, right, which dimensions of the input affect which dimensions of the output? What sort of structure can we see in our Jacobian matrix? [student speaking away from microphone] Okay, so I heard that it's diagonal, right, exactly. So because this is element-wise, right, each element of the input, say the first dimension, only affects that corresponding element in the output, right? And so because of that, our Jacobian matrix is just going to be a diagonal matrix. And so in practice then, we don't actually have to write out and formulate this entire Jacobian, we can just know the effect of x on the output, right, and then we can just use these values, right, and fill them in as we're computing the gradient. Okay, so now we're going to go through a more concrete vectorized example of a computational graph. Right, so let's look at a case where we have the function f of x and W equal to the squared L-two norm of W multiplied by x, and so in this case we're going to say x is n-dimensional and W is n by n. Right, so again our first step, writing out the computational graph, right? We have W multiplied by x, and then followed by, I'm just going to call this L-two. And so now let's also fill out some values for this, so let's say we have W be this two by two matrix, and x is going to be this two-dimensional vector, right? And so we can again label our intermediate nodes. So our intermediate node after the multiplication is going to be q, we have q equals W times x, which we can write out element-wise this way, where the first element is just W-one-one times x-one plus W-one-two times x-two and so on, and then we can now express f in relation to q, right? So looking at the second node we have f of q is equal to the squared L-two norm of q, which is equal to q-one squared plus q-two squared.
Okay, so we filled this in, right, we get q and then we get our final output. Okay, so now let's do backprop through this, right? So again, this is always the first step: the gradient with respect to our output is just one. Okay, so now let's move back one node, so now we want to find the gradient with respect to q, right, our intermediate variable before the L-two. And so q is a two-dimensional vector, and what we want to do is find how each element of q affects our final value of f, right, and so if we look at this expression that we've written out for f here at the bottom, we can see that the gradient of f with respect to a specific q-i, let's say q-one, is just going to be two times q-i, right? This is just taking this derivative here, and so we have this expression with respect to each element q-i, and we could also, you know, write this out in vector form if we want to, it's just going to be two times our vector q, right, if we want to write this out in vector form, and so what we get is that our gradient is 0.44 and 0.52, this vector, right? And so you can see that it just took q and scaled it by two, right? Each element is just multiplied by two. So the gradient with respect to a vector is always going to be the same size as the original vector, and each element of this gradient means how much this particular element affects our final output of the function. Okay, so now let's move one step backwards, right, what's the gradient with respect to W? And so here again we want to use the same concept of applying the chain rule, right, so we want to compute our local gradient of q with respect to W, and so let's look at this again element-wise, and if we do that, let's see what's the effect of each element of q with respect to each element of W, and so this is going to be the Jacobian that we talked about earlier, and if we look at this multiplication, q is equal to W times x, right, what's the derivative, or the gradient, of the first element of q, so our first element up top, with respect to W-one-one? So q-one with respect to W-one-one? What's that value? X-one, exactly. Yeah, so we know that this is x-one, and we can write this out more generally: the gradient of q-k with respect to W-i,j is equal to x-j when k equals i, and zero otherwise. And now we want to find the gradient of f with respect to each W-i,j. So looking at these derivatives now, we can use the chain rule that we talked about earlier, where we basically compound df over dq-k for each element of q with dq-k over dW-i,j for each element of W-i,j, right? So we find the effect of each element of W on each element of q, and sum this across all q. And so if you write this out, this is going to give the expression two times q-i times x-j. Okay, and so filling this out then we get this gradient with respect to W, and so again we can compute this element-wise, or we can also look at this expression that we've derived and write it out in vectorized form, right? So okay, and remember, the important thing is always to check that the gradient with respect to a variable has the same shape as the variable, and this is something really useful in practice to sanity check, right, like once you've computed what your gradient should be, check that it has the same shape as your variable, because again, each element of your gradient is quantifying how much that element is affecting your final output. Yeah?
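Here is a minimal numpy sketch of this whole vectorized example; the particular W and x values are an assumption chosen to reproduce the 2q = [0.44, 0.52] gradient above, and the gradient with respect to x, which is derived just below, is included for completeness:

import numpy as np

# f(x, W) = ||W x||^2, the running example; W and x here are assumed values
# (they reproduce the dq = [0.44, 0.52] seen above).
W = np.array([[0.1, 0.5],
              [-0.3, 0.8]])
x = np.array([0.2, 0.4])

# forward pass
q = W.dot(x)            # intermediate node, q = [0.22, 0.26]
f = np.sum(q ** 2)      # squared L2 norm, 0.116

# backward pass
df = 1.0                # gradient of the output with respect to itself
dq = 2.0 * q * df       # df/dq_i = 2 q_i  ->  [0.44, 0.52]
dW = np.outer(dq, x)    # df/dW_ij = 2 q_i x_j, same shape as W
dx = W.T.dot(dq)        # df/dx_j = sum_k 2 q_k W_kj (derived next), same shape as x

print(q, f)
print(dW)
print(dx)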
[student speaking away from microphone] The both sides, oh the both sides one is an indicator function, so this is saying that it's just one if k equals i. Okay, so let's see, so we've done that, and so now let's see one more example. The last thing we need to find is the gradient with respect to x. So here if we compute the partial derivatives we can see that dq-k over dx-i is equal to W-k,i, right, in the same way as we did it for W, and then again we can just use the chain rule and get the total expression for that, right? And so this is going to be the gradient with respect to x, again, of the same shape as x, and we can also write this out in vectorized form if we want. Okay, so any questions about this, yeah? [student speaking away from microphone] So we are computing the Jacobian, so let me go back here, right, so we have these partial derivatives of q-k with respect to x-i, right, and these are forming the entries of your Jacobian, right? And so in practice what we're going to do is we basically take that, and you're going to see it up there in the chain rule, so the vectorized expression of the gradient with respect to x, right, this is going to have the Jacobian here, which is this transposed value here, so you can write it out in vectorized form. [student speaking away from microphone] So well, so in this case the matrix is going to be the same size as W, right, so it's not actually a large matrix in this case, right? Okay, so the way that we've been thinking about this is like a really modularized implementation, right, where in our computational graph, right, we look at each node locally and we compute the local gradients and chain them with upstream gradients coming down, and so you can think of this as basically a forward and a backwards API, right? In the forward pass we implement, you know, a function computing the output of this node, and then in the backwards pass we compute the gradient. And so when we actually implement this in code, we're going to do this in exactly the same way. So we can basically think about, for each gate, right, if we implement a forward function and a backward function, where the backward function is computing the chain rule, then if we have our entire graph, we can just make a forward pass through the entire graph by iterating through all the nodes in the graph, all the gates. Here I'm going to use the words gate and node kind of interchangeably, we can iterate through all of these gates and just call forward on each of the gates, right? And we just want to do this in topologically sorted order, so we process all of the inputs coming in to a node before we process that node. And then going backwards, we're just going to go through all of the gates in this reverse sorted order, and then call backwards on each of these gates. Okay, and so if we then look at the implementation for our particular gates, so for example, this MultiplyGate here, we want to implement the forward pass, right, so it gets x and y as inputs, and returns the value of z, and then when we go backwards, right, we get as input dz, which is our upstream gradient, and we want to output the gradients on the inputs x and y to pass down, right?
So we're going to output dx and dy, and so in this case, in this example, everything is back to the scalar case here, and so if we look at this in the forward pass, one thing that's important is that we need to, we should cache the values of the forward pass, right, because we end up using this in the backward pass a lot of the time. So here in the forward pass, we want to cache the values of x and y, right, and in the backward pass, using the chain rule, we're going to, remember, take the value of the upstream gradient and scale it by the value of the other branch, right, and so we'll keep, for dx we'll take our value of self.y that we kept, and multiply it by dz coming down, and same for dy. Okay, so if you look at a lot of deep-learning frameworks and libraries you'll see that they exactly follow this kind of modularization, right? So for example, Caffe is a popular deep learning framework, and you'll see, if you go look through the Caffe source code you'll get to some directory that says layers, and in layers, which are basically computational nodes, usually layers might be slightly more, you know, some of these more complex computational nodes like the sigmoid that we talked about earlier, you'll see, basically just a whole list of all different kinds of computational nodes, right? So you might have the sigmoid, and I know there might be here, there's like a convolution is one, there's an Argmax is another layer, you'll have all of these layers and if you dig in to each of them, they're just exactly implementing a forward pass and a backward pass, and then all of these are called when we do forward and backward pass through the entire network that we formed, and so our network is just basically going to be stacking up all of these, the different layers that we choose to use in the network. So for example, if we look at a specific one, in this case a sigmoid layer, you'll see that in the sigmoid layer, right, we've talked about the sigmoid function, you'll see that there's a forward pass which basically computes exactly the sigmoid expression, and then a backward pass, right, where it is taking as input something, basically a top_diff, which is our upstream gradient in this case, and multiplying it by a local gradient that we compute. So in assignment one you'll get practice with this kind of, this computational graph way of thinking where, you know, you're going to be writing your SVM and Softmax classes, and taking the gradients of these. And so again, remember always you want to first step, represent it as a computational graph, right? Figure out what are all the computations that you did leading up to the output, and then when you, when it's time to do your backward pass, just take the gradient with respect to each of these intermediate variables that you've defined in your computational graph, and use the chain rule to link them all together. Okay, so summary of what we've talked about so far. When we get down to, you know, working with neural networks, these are going to be really large and complex, so it's going to be impractical to write down the gradient formula by hand for all your parameters. So in order to get these gradients, right, we talked about how, what we should use is backpropagation, right, and this is kind of one of the core techniques of, you know, neural networks, is basically using backpropagation to get your gradients, right? 
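Pulling the forward and backward passes of that multiply gate together, a minimal Python sketch might look like the following (the class and method names are illustrative, loosely following the pseudocode described above rather than any particular library's API):

class MultiplyGate:
    """One gate/node in the graph: z = x * y (scalar case, as in the example above)."""

    def forward(self, x, y):
        # cache the inputs; the backward pass needs them
        self.x = x
        self.y = y
        return x * y

    def backward(self, dz):
        # chain rule: scale the upstream gradient dz by the value of the *other* branch
        dx = self.y * dz
        dy = self.x * dz
        return dx, dy

# a tiny usage example
gate = MultiplyGate()
z = gate.forward(3.0, -4.0)     # forward pass: z = -12
dx, dy = gate.backward(2.0)     # backward pass with upstream gradient 2
print(z, dx, dy)                # -12.0, -8.0, 6.0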
And so this is a recursive application of the chain rule where we have this computational graph, and we start at the back and we go backwards through it to compute the gradients with respect to all of the intermediate variables, which are your inputs, your parameters, and everything else in the middle. And we've also talked about how really this implementation and this graph structure, each of these nodes is really, you can see this as implementing a forward and backwards API, right? And so in the forward pass we want to compute the results of the operation, and we want to save any intermediate values that we might want to use later in our gradient computation, and then in the backwards pass we apply this chain rule and we take this upstream gradient, we chain it, multiply it with our local gradient to compute the gradient with respect to the inputs of the node, and we pass this down to the nodes that are connected next. Okay, so now finally we're going to talk about neural networks. All right, so really, you know, neural networks, people draw a lot of analogies between neural networks and the brain, and different types of biological inspirations, and we'll get to that in a little bit, but first let's talk about it, you know, just looking at it as a function, as a class of functions, without all of the brain stuff. So, so far we've talked about, you know, we've worked a lot with this linear score function, right? f equals W times x, and so we've been using this as a running example of a function that we want to optimize. So instead of using this single linear transformation, if we want a neural network, we can just, in the simplest form, stack two of these together, right? Just a linear transformation on top of another one in order to get a two-layer neural network, right? And so what this looks like is first we have our, you know, a matrix multiply of W-one with x, and then we get this intermediate variable, and we have this non-linear function, a max of zero with the output of this linear layer, and it's really important to have these non-linearities in place, which we'll talk about more later, because otherwise if you just stack linear layers on top of each other, they're just going to collapse to, like, a single linear function. Okay, so we have our first linear layer and then we have this non-linearity, right, and then on top of this we'll add another linear layer. And then from here, finally we can get our score function, our output vector of scores. So basically, like, more broadly speaking, neural networks are a class of functions where we have simpler functions, right, that are stacked on top of each other, and we stack them in a hierarchical way in order to make up a more complex non-linear function, and so this is the idea of having, basically, multiple stages of hierarchical computation, right? And so, you know, this is kind of the main way that we do this, by taking something like this matrix multiply, this linear layer, and we just stack multiple of these on top of each other with non-linear functions in-between, right? And so one thing that this can help solve is if we remember back to this linear score function that we were talking about, right, remember we discussed earlier how each row of our weight matrix W was something like a template.
It was a template that sort of expressed, you know, what we're looking for in the input for a specific class, right, so for example, you know, the car template looks something like this kind of fuzzy red car, and we were looking for this in the input to compute the score for the car class. And we talked about one of the problems with this is that there's only one template, right? There's this red car, whereas in practice, we actually have multiple modes, right? We might be looking for, you know, a red car, but there's also a yellow car, like all of these are different kinds of cars, and so what this kind of multiple layer network lets you do is, now, you know, with this intermediate variable h, right, W-one can still be these kinds of templates, but now you have all of these scores for these templates in h, and we can have another layer on top that's combining these together, right? So we can say that actually my car class should be, you know, connected to, we're looking for both red cars as well as yellow cars, right, because we have this matrix W-two which is now a weighting of all the elements of our vector h. Okay, any questions about this? Yeah? [student speaking away from microphone] Yeah, so there's a lot of ways, so there's a lot of different non-linear functions that you can choose from, and we'll talk later on in a later lecture about all the different kinds of non-linearities that you might want to use. - [Student] For the pictures in the slide, so, on the bottom row you have images of your vector W-one weight, and so maybe you would have images of another vector W-two? - So W-one, because it's directly connected to the input x, this is what's like, really interpretable, because you can formulate all of these templates. W-two, so h is going to be a score of how much of each template you saw, for example, right, so it might be like you have, you know, I don't know, a two for the red car, and like, a one for the yellow car or something like that. - [Student] Oh, okay, so instead of W-one being just 10, like, you would have a left-facing horse and a right-facing horse, and they'd both be included-- - Exactly, so the question is basically whether in W-one you could have both a left-facing horse and a right-facing horse, right, and so yeah, exactly. So now W-one can be many different kinds of templates, right? And then W-two is basically a weighted sum of all of these templates. So now it allows you to weight together multiple templates in order to get the final score for a particular class. - [Student] So if you're processing an image then it's actually left-facing horse. It'll get a really high score with the left-facing horse template, and a lower score with the right-facing horse template, and then this will take the maximum of the two? - Right, so okay, so the question is, if our image x is like a left-facing horse and in W-one we have a template of a left-facing horse and a right-facing horse, then what's happening, right? So what happens is yeah, so in h you might have a really high score for your left-facing horse, kind of a lower score for your right-facing horse, and W-two is a weighted sum, so it's not a maximum. It's a weighted sum of these templates, but if you have either a really high score for one of these templates, or let's say you have kind of a lower and medium score for both of these templates, all of these kinds of combinations are going to give high scores, right?
And so in the end what you're going to get is something that generally scores high when you have a horse of any kind. So let's say you had a front-facing horse, you might have medium values for both the left and the right templates. Yeah, question? - [Student] So is W-two doing the weighting, or is h doing the weighting? - W-two is doing the weighting, so the question is, "Is W-two doing the weighting or is h doing the weighting?" h is the value, like in this example, h is the value of scores for each of your templates that you have in W-one, right? So h is like the score function, right, it's how much of each template in W-one is present, and then W-two is going to weight all of these, weight all of these intermediate scores to get your final score for the class. - [Student] And which is the non-linear thing? - So the question is, "which is the non-linear thing?" So the non-linearity usually happens right before h, so h is the value right after the non-linearity. So we're talking about this, like, you know, intuitively as this example of like, W-one is looking for, you know, has these same templates as before, and W-two is a weighting for these. In practice it's not exactly like this, right, because as you said, there's all these non-linearities thrown in and so on, but it has this approximate type of interpretation to it. - [Student] So h is just W-one-x then? - Yeah, yeah, so the question is h just W-one-x? So h is just W-one times x, with the max function on top. Oh, let me just, okay so, so we've talked about this as an example of a two-layer neural network, and we can stack more layers of these to get deeper networks of arbitrary depth, right? So we can just do this one more time at another non-linearity and matrix multiply now by W-three, and now we have a three-layer neural network, right? And so this is where the term deep neural networks is basically coming from, right? This idea that you can stack multiple of these layers, you know, for very deep networks. And so in homework you'll get a practice of writing and you know, training one of these neural networks, I think in assignment two, but basically a full implementation of this using this idea of forward pass, right, and backward passes, and using chain rule to compute gradients that we've already seen. The entire implementation of a two-layer neural network is actually really simple, it can just be done in 20 lines, and so you'll get some practice with this in assignment two, writing out all of these parts. And okay, so now that we've sort of seen what neural networks are as a function, right, like, you know, we hear people talking a lot about how there's biological inspirations for neural networks, and so even though it's important that to emphasize that these analogies are really loose, it's really just very loose ties, but it's still interesting to understand where some of these connections and inspirations come from. And so now I'm going to talk briefly about that. So if we think about a neuron, in kind of a very simple way, this neuron is, here's a diagram of a neuron. We have the impulses that are carried towards each neuron, right, so we have a lot of neurons connected together and each neuron has dendrites, right, and these are sort of, these are what receives the impulses that come into the neuron. 
And then we have a cell body, right, that basically integrates these signals coming in, and then, after integrating all these signals, it passes the impulse on, away from the cell body, to downstream neurons that it's connected to, right, and it carries this away through axons. So now if we look at what we've been doing so far, right, with each computational node, you can see it in kind of a similar way, right? Nodes are connected to each other in the computational graph, and we have inputs, or signals, x, coming into a neuron, and then all of these x's, right, x-zero, x-one, x-two, these are combined and integrated together, right, using, for example, our weights, W. So we do some sort of computation, right, and in some of the computations we've been doing so far, something like W times x plus b, right, integrating all these together, and then we have an activation function that we apply on top, we get this value of this output, and we pass it down to the connecting neurons. So if you look at this, you can think about it in a very similar way, right? Like, you know, the signals coming in are connected at synapses, right? The synapses connecting the multiple neurons, the dendrites are integrating all of this information together in the cell body, and then we have the output carried away later on. And so this is kind of the analogy that you can draw between them, and if you look at these activation functions, right? This is what basically takes all the inputs coming in and outputs one number that's going out later on, and we've talked about examples like the sigmoid activation function, right, and different kinds of non-linearities, and so sort of one kind of loose analogy that you can draw is that these non-linearities can represent something sort of like the firing, or spiking, rate of the neurons, right? Where our neurons transmit signals to connecting neurons using kind of these discrete spikes, right? And so we can think of, you know, if they're spiking very fast then there's kind of a strong signal that's passed later on, and so we can think of this value after our activation function as sort of, in a sense, this firing rate that we're going to pass on. And you know in practice, I think neuroscientists who are actually studying this say that one of the non-linearities that's most similar to the way that neurons are actually behaving is the ReLU non-linearity, which is something that we're going to look at more later on, but it's a function that's zero for all negative values of the input, and then it's a linear function for everything that's in kind of a positive regime. And so, you know, we'll talk more about this activation function later on, but that's kind of, in practice, maybe the one that's most similar to how neurons are actually behaving. But it's really important to be extremely careful with making any of these sorts of brain analogies, because in practice biological neurons are way more complex than this. There's many different kinds of biological neurons, the dendrites can perform really complex non-linear computations.
Our synapses, right, the W-zeros that we had earlier where we drew this analogy, are not single weights like we had, they're actually really complex, you know, non-linear dynamical systems in practice, and also this idea of interpreting our activation function as a sort of rate code or firing rate is also, is insufficient in practice, you know. It's just this kind of firing rate is probably not a sufficient model of how neurons will actually communicate to downstream neurons, right, like even as a very simple way, there's a very, the neurons will fire at a variable rate, and this variability probably should be taken into account. And so there's all of these, you know, it's kind of a much more complex thing than what we're dealing with. There's references, for example this dendritic computation that you can look at if you're interested in this topic, but yeah, so that in practice, you know, we can sort of see how it may resemble a neuron at this very high level, but neurons are, in practice, much more complicated than that. Okay, so we talked about how there's many different kinds of activation functions that could be used, there's the ReLU that I mentioned earlier, and we'll talk about all of these different kinds of activation functions in much more detail later on, choices of these activation functions that you might want to use. And so we'll also talk about different kinds of neural network architectures. So we gave the example of these fully connected neural networks, right, where each layer is this matrix multiply, and so the way we actually want to call these is, we said two-layer neural network before, and that corresponded to the fact that we have two of these linear layers, right, where we're doing a matrix multiply, two fully connected layers is what we call these. We could also call this a one-hidden-layer neural network, so instead of counting the number of matrix multiplies we're doing, counting the number of hidden layers that we have. I think it's, you can use either, I think maybe two-layer neural network is something that's a little more commonly used. And then also here, for our three-layer neural network that we have, this can also be called a two-hidden-layer neural network. And so we saw that, you know, when we're doing this type of feed forward, right, forward pass through a neural network, each of these nodes in this network is basically doing the kind of operation of the neuron that I showed earlier, right? And so what's actually happening is is basically each hidden layer you can think of as a whole vector, right, a set of these neurons, and so by writing it out this way with these matrix multiplies to compute our neuron values, it's a way that we can efficiently evaluate this entire layer of neurons, right? So with one matrix multiply we get output values of, you know, of a layer of let's say 10, or 50 or 100 of neurons. All right, and so looking at this again, writing this out, all out in matrix form, matrix-vector form, we have our, you know, non-linearity here. F that we're using, in this case a sigmoid function, right, and we can take our data x, some input vector or our values, and we can apply our first matrix multiply, W-one on top of this, then our non-linearity, then a second matrix multiply to get a second hidden layer, h-two, and then we have our final output, right? And so, you know, this is basically all you need to be able to write a neural network, and as we saw earlier, the backward pass. 
You then just use backprop to compute all of those, and so that's basically all there is to kind of the main idea of what's a neural network. Okay, so just to summarize, we talked about how we could arrange neurons into these computations, right, of fully-connected or linear layers. This abstraction of a layer has a nice property that we can use very efficient vectorized code to compute all of these. We also talked about how it's important to keep in mind that neural networks do have some, you know, analogy and loose inspiration from biology, but they're not really neural. I mean, this is a pretty loose analogy that we're making, and next time we'll talk about convolutional neural networks. Okay, thanks.
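To make the summary concrete, here is a minimal numpy sketch of a two-layer network of the kind described in this lecture, with a forward pass, a backward pass via the chain rule, and a plain gradient-descent update (the layer sizes, the sigmoid non-linearity, the squared-error loss, and the random data are all illustrative assumptions):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

np.random.seed(0)
x = np.random.randn(3, 1)        # a single 3-dimensional input (column vector)
y = np.random.randn(2, 1)        # a 2-dimensional target, just for illustration

W1 = np.random.randn(4, 3)       # first layer: 3 inputs -> 4 hidden units
W2 = np.random.randn(2, 4)       # second layer: 4 hidden units -> 2 outputs

for step in range(100):
    # forward pass
    h = sigmoid(W1.dot(x))       # hidden layer, non-linearity applied element-wise
    out = W2.dot(h)              # scores
    loss = np.sum((out - y) ** 2)

    # backward pass (chain rule, layer by layer)
    dout = 2.0 * (out - y)                   # d(loss)/d(out)
    dW2 = dout.dot(h.T)                      # gradient on the second-layer weights
    dh = W2.T.dot(dout)                      # gradient flowing into the hidden layer
    dW1 = (dh * h * (1 - h)).dot(x.T)        # sigmoid local gradient, then into W1

    # simple gradient-descent update, as in the optimization lecture
    W1 -= 0.05 * dW1
    W2 -= 0.05 * dW2

print(loss)   # typically much smaller than at the start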
Lecture_Collection_Convolutional_Neural_Networks_for_Visual_Recognition_Spring_2017
Lecture_5_Convolutional_Neural_Networks.txt
- Okay, let's get started. Alright, so welcome to lecture five. Today we're going to be getting to the title of the class, Convolutional Neural Networks. Okay, so a couple of administrative details before we get started. Assignment one is due Thursday, April 20, 11:59 p.m. on Canvas. We're also going to be releasing assignment two on Thursday. Okay, so a quick review of last time. We talked about neural networks, and how we had the running example of the linear score function that we talked about through the first few lectures. And then we turned this into a neural network by stacking these linear layers on top of each other with non-linearities in between. And we also saw that this could help address the mode problem where we are able to learn intermediate templates that are looking for, for example, different types of cars, right. A red car versus a yellow car and so on. And to combine these together to come up with the final score function for a class. Okay, so today we're going to talk about convolutional neural networks, which is basically the same sort of idea, but now we're going to learn convolutional layers that reason on top of basically explicitly trying to maintain spatial structure. So, let's first talk a little bit about the history of neural networks, and then also how convolutional neural networks were developed. So we can go all the way back to 1957 with Frank Rosenblatt, who developed the Mark I Perceptron machine, which was the first implementation of an algorithm called the perceptron, which had sort of the similar idea of getting score functions, right, using some, you know, W times X plus a bias. But here the outputs are going to be either one or a zero. And then in this case we have an update rule, so an update rule for our weights, W, which also look kind of similar to the type of update rule that we're also seeing in backprop, but in this case there was no principled backpropagation technique yet, we just sort of took the weights and adjusted them in the direction towards the target that we wanted. So in 1960, we had Widrow and Hoff, who developed Adaline and Madaline, which was the first time that we were able to get, to start to stack these linear layers into multilayer perceptron networks. And so this is starting to now look kind of like this idea of neural network layers, but we still didn't have backprop or any sort of principled way to train this. And so the first time backprop was really introduced was in 1986 with Rumelhart. And so here we can start seeing, you know, these kinds of equations with the chain rule and the update rules that we're starting to get familiar with, right, and so this is the first time we started to have a principled way to train these kinds of network architectures. And so after that, you know, it still wasn't able to scale to very large neural networks, and so there was sort of a period in which there wasn't a whole lot of new things happening here, or a lot of popular use of these kinds of networks. And so this really started being reinvigorated around the 2000s, so in 2006, there was this paper by Geoff Hinton and Ruslan Salakhutdinov, which basically showed that we could train a deep neural network, and show that we could do this effectively. But it was still not quite the sort of modern iteration of neural networks. 
It required really careful initialization in order to be able to do backprop, and so what they had here was this first pre-training stage, where you model each hidden layer through a restricted Boltzmann machine, and so you're going to get some initialized weights by training each of these layers iteratively. And so once you get all of these hidden layers you then use that to initialize your, you know, your full neural network, and then from there you do backprop and fine tuning of that. And so when we really started to get the first really strong results using neural networks, what sort of really sparked the whole craze of starting to use these kinds of networks really widely was around 2012, when we first had strong results for speech recognition, and so this is work out of Geoff Hinton's lab for acoustic modeling and speech recognition. And then for image recognition, 2012 was the landmark paper from Alex Krizhevsky in Geoff Hinton's lab, which introduced the first convolutional neural network architecture that was able to get really strong results on ImageNet classification. And so it took the ImageNet image classification benchmark and was able to dramatically reduce the error on that benchmark. And so since then, you know, ConvNets have gotten really widely used in all kinds of applications. So now let's step back and take a look at what gave rise to convolutional neural networks specifically. And so we can go back to the 1950s, where Hubel and Wiesel did a series of experiments trying to understand how neurons in the visual cortex worked, and they studied this specifically for cats. And so we talked a little bit about this in lecture one, but basically in these experiments they put electrodes into the cat brain, and they gave the cat different visual stimuli. Right, and so, things like, you know, different kinds of edges, oriented edges, different sorts of shapes, and they measured the response of the neurons to these stimuli. And so there were a couple of important conclusions and observations that they were able to make. And so the first thing they found was that, you know, there's sort of this topographical mapping in the cortex. So nearby cells in the cortex also represent nearby regions in the visual field. And so you can see, for example, on the right here, where if you take kind of the spatial mapping and map this onto the visual cortex, the more peripheral regions are these blue areas, you know, farther away from the center. And so they also discovered that these neurons had a hierarchical organization. And so if you look at different types of visual stimuli, they were able to find that at the earliest layers, retinal ganglion cells were responsive to things that looked kind of like circular regions of spots. And then on top of that there are simple cells, and these simple cells are responsive to oriented edges, so different orientations of the light stimulus. And then going further, they discovered that these were then connected to more complex cells, which were responsive to both light orientation as well as movement, and so on. And you get, you know, increasing complexity, for example, hypercomplex cells are now responsive to movement with kind of an endpoint, right, and so now you're starting to get the idea of corners and then blobs and so on.
And so then in 1980, the neocognitron was the first example of a network architecture, a model, that had this idea of simple and complex cells that Hubel and Wiesel had discovered. And in this case Fukushima put these into these alternating layers of simple and complex cells, where you had these simple cells that had modifiable parameters, and then complex cells on top of these that performed a sort of pooling so that it was invariant to, you know, different minor modifications from the simple cells. And so this is work that was in the 1980s, right, and so by 1998 Yann LeCun basically showed the first example of applying backpropagation and gradient-based learning to train convolutional neural networks that did really well on document recognition. And specifically they were able to do a good job of recognizing digits of zip codes. And so these were then used pretty widely for zip code recognition in the postal service. But beyond that it wasn't able to scale yet to more challenging and complex data, right, digits are still fairly simple and a limited set to recognize. And so this is where Alex Krizhevsky, in 2012, gave the modern incarnation of convolutional neural networks and his network we sort of colloquially call AlexNet. But this network really didn't look so much different than the convolutional neural networks that Yann LeCun was dealing with. They're now, you know, they were scaled now to be larger and deeper and able to, the most important parts were that they were now able to take advantage of the large amount of data that's now available, in web images, in ImageNet data set. As well as take advantage of the parallel computing power in GPUs. And so we'll talk more about that later. But fast forwarding today, so now, you know, ConvNets are used everywhere. And so we have the initial classification results on ImageNet from Alex Krizhevsky. This is able to do a really good job of image retrieval. You can see that when we're trying to retrieve a flower for example, the features that are learned are really powerful for doing similarity matching. We also have ConvNets that are used for detection. So we're able to do a really good job of localizing where in an image is, for example, a bus, or a boat, and so on, and draw precise bounding boxes around that. We're able to go even deeper beyond that to do segmentation, right, and so these are now richer tasks where we're not looking for just the bounding box but we're actually going to label every pixel in the outline of, you know, trees, and people, and so on. And these kind of algorithms are used in, for example, self-driving cars, and a lot of this is powered by GPUs as I mentioned earlier, that's able to do parallel processing and able to efficiently train and run these ConvNets. And so we have modern powerful GPUs as well as ones that work in embedded systems, for example, that you would use in a self-driving car. So we can also look at some of the other applications that ConvNets are used for. So, face-recognition, right, we can put an input image of a face and get out a likelihood of who this person is. ConvNets are applied to video, and so this is an example of a video network that looks at both images as well as temporal information, and from there is able to classify videos. We're also able to do pose recognition. Being able to recognize, you know, shoulders, elbows, and different joints. And so here are some images of our fabulous TA, Lane, in various kinds of pretty non-standard human poses. 
But ConvNets are able to do a pretty good job of pose recognition these days. They're also used in game playing. So some of the work in deep reinforcement learning that you may have seen, playing Atari games, and Go, and so on, and ConvNets are an important part of all of these. Some other applications, so they're being used for interpretation and diagnosis of medical images, for classification of galaxies, for street sign recognition. There's also whale recognition, this is from a recent Kaggle Challenge. We also have examples of looking at aerial maps and being able to draw out where the streets are on these maps, where the buildings are, and being able to segment all of these. And then beyond recognition tasks like classification and detection, we also have tasks like image captioning, where given an image, we want to write a sentence description about what's in the image. And so this is something that we'll go into a little bit later in the class. And we also have, you know, really, really fancy and cool kinds of artwork that we can do using neural networks. And so on the left is an example of deep dream, where we're able to take images and kind of hallucinate different kinds of objects and concepts in the image. There's also neural style type work, where we take an image and we're able to re-render this image using the style of a particular artist and artwork, right. And so here we can take, for example, Van Gogh on the right, Starry Night, and use that to redraw our original image using that style. And Justin has done a lot of work in this and so if you guys are interested, these are images produced by some of his code and you guys should talk to him more about it. Okay, so basically, you know, this is just a small sample of where ConvNets are being used today. But there's really a huge amount that can be done with this, right, and so, you know, for you guys' projects, sort of, you know, let your imagination go wild and we're excited to see what sorts of applications you can come up with. So today we're going to talk about how convolutional neural networks work. And again, same as with neural networks, we're going to first talk about how they work from a functional perspective without any of the brain analogies. And then we'll talk briefly about some of these connections. Okay, so, last lecture, we talked about this idea of a fully connected layer. And how, you know, for a fully connected layer what we're doing is we operate on top of these vectors, right, and so let's say we have, you know, an image, a 3D image, 32 by 32 by three, so some of the images that we were looking at previously. We'll take that, we'll stretch all of the pixels out, right, and then we have this 3072-dimensional vector, for example in this case. And then we have these weights, right, so we're going to multiply this by a weight matrix. And so here for example our W we're going to say is 10 by 3072. And then we're going to get the activations, the output of this layer, right, and so in this case, we take each of our 10 rows and we do this dot product with the 3072-dimensional input. And from there we get this one number that's kind of the value of that neuron. And so in this case we're going to have 10 of these neuron outputs. And so a convolutional layer, so the main difference between this and the fully connected layer that we've been talking about is that here we want to preserve spatial structure.
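Before moving on to the convolutional layer, here is a minimal numpy sketch of that fully connected computation (the shapes follow the running example; the random values are just placeholders):

import numpy as np

image = np.random.randn(32, 32, 3)    # a 32x32x3 input image, random values for illustration
x = image.reshape(-1)                 # stretch all pixels out into a 3072-dimensional vector

W = np.random.randn(10, 3072)         # weight matrix: one row per output neuron
b = np.random.randn(10)               # biases

activations = W.dot(x) + b            # each row dotted with the input -> 10 neuron outputs
print(activations.shape)              # (10,)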
And so taking this 32 by 32 by three image that we had earlier, instead of stretching this all out into one long vector, we're now going to keep the structure of this image, right, this three dimensional input. And then what we're going to do is our weights are going to be these small filters, so in this case for example, a five by five by three filter, and we're going to take this filter and we're going to slide it over the image spatially and compute dot products at every spatial location. And so we're going to go into detail of exactly how this works. So, our filters, first of all, always extend the full depth of the input volume. And so they're going to be just a smaller spatial area, so in this case five by five, right, instead of our full 32 by 32 spatial input, but they're always going to go through the full depth, right, so here we're going to take five by five by three. And then we're going to take this filter and at a given spatial location we're going to do a dot product between this filter and then a chunk of a image. So we're just going to overlay this filter on top of a spatial location in the image, right, and then do the dot product, the multiplication of each element of that filter with each corresponding element in that spatial location that we've just plopped it on top of. And then this is going to give us a dot product. So in this case, we have five times five times three, this is the number of multiplications that we're going to do, right, plus the bias term. And so this is basically taking our filter W and basically doing W transpose times X and plus bias. So is that clear how this works? Yeah, question. [faint speaking] Yeah, so the question is, when we do the dot product do we turn the five by five by three into one vector? Yeah, in essence that's what you're doing. You can, I mean, you can think of it as just plopping it on and doing the element-wise multiplication at each location, but this is going to give you the same result as if you stretched out the filter at that point, stretched out the input volume that it's laid over, and then took the dot product, and that's what's written here, yeah, question. [faint speaking] Oh, this is, so the question is, any intuition for why this is a W transpose? And this was just, not really, this is just the notation that we have here to make the math work out as a dot product. So it just depends on whether, how you're representing W and whether in this case if we look at the W matrix this happens to be each column and so we're just taking the transpose to get a row out of it. But there's no intuition here, we're just taking the filters of W and we're stretching it out into a one D vector, and in order for it to be a dot product it has to be like a one by, one by N vector. [faint speaking] Okay, so the question is, is W here not five by five by three, it's one by 75. So that's the case, right, if we're going to do this dot product of W transpose times X, we have to stretch it out first before we do the dot product. So we take the five by five by three, and we just take all these values and stretch it out into a long vector. 
And so again, similar to the other question, the actual operation that we're doing here is plopping our filter on top of a spatial location in the image and multiplying all of the corresponding values together, but in order just to make it kind of an easy expression similar to what we've seen before we can also just stretch each of these out, make sure that dimensions are transposed correctly so that it works out as a dot product. Yeah, question. [faint speaking] Okay, the question is, how do we slide the filter over the image. We'll go into that next, yes. [faint speaking] Okay, so the question is, should we rotate the kernel by 180 degrees to better match the convolution, the definition of a convolution. And so the answer is that we'll also show the equation for this later, but we're using convolution as kind of a looser definition of what's happening. So for people from signal processing, what we are actually technically doing, if you want to call this a convolution, is we're convolving with the flipped version of the filter. But for the most part, we just don't worry about this and we just, yeah, do this operation and it's like a convolution in spirit. Okay, so... Okay, so we had a question earlier, how do we, you know, slide this over all the spatial locations. Right, so what we're going to do is we're going to take this filter, we're going to start at the upper left-hand corner and basically center our filter on top of every pixel in this input volume. And at every position, we're going to do this dot product and this will produce one value in our output activation map. And so then we're going to just slide this around. The simplest version is just at every pixel we're going to do this operation and fill in the corresponding point in our output activation. You can see here that the dimensions are not exactly what would happen, right, if you're going to do this. I had 32 by 32 in the input and I'm having 28 by 28 in the output, and so we'll go into examples later of the math of exactly how this is going to work out dimension-wise, but basically you have a choice of how you're going to slide this, whether you go at every pixel or whether you slide, let's say, you know, two input values over at a time, two pixels over at a time, and so you can get different size outputs depending on how you choose to slide. But you're basically doing this operation in a grid fashion. Okay, so what we just saw earlier, this is taking one filter, sliding it over all of the spatial locations in the image and then we're going to get this activation map out, right, which is the value of that filter at every spatial location. And so when we're dealing with a convolutional layer, we want to work with multiple filters, right, because each filter is kind of looking for a specific type of template or concept in the input volume. And so we're going to have a set of multiple filters, and so here I'm going to take a second filter, this green filter, which is again five by five by three, I'm going to slide this over all of the spatial locations in my input volume, and then I'm going to get out this second green activation map also of the same size. And we can do this for as many filters as we want to have in this layer. So for example, if we have six filters, six of these five by five filters, then we're going to get in total six activation maps out. All of, so we're going to get this output volume that's going to be basically six by 28 by 28. 
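A naive, deliberately slow numpy sketch of what this convolutional layer is computing, sliding six five by five by three filters over a 32 by 32 by three input at stride one to produce six 28 by 28 activation maps (random values are placeholders; real implementations are heavily vectorized):

import numpy as np

x = np.random.randn(32, 32, 3)          # input volume (random, for illustration)
filters = np.random.randn(6, 5, 5, 3)   # six 5x5x3 filters
b = np.random.randn(6)                  # one bias per filter

H, W_in, _ = x.shape
num_f, F, _, _ = filters.shape
out_size = H - F + 1                     # 32 - 5 + 1 = 28 with stride 1, no padding

out = np.zeros((num_f, out_size, out_size))
for f in range(num_f):                   # one activation map per filter
    for i in range(out_size):
        for j in range(out_size):
            patch = x[i:i+F, j:j+F, :]   # the 5x5x3 chunk of the image under the filter
            out[f, i, j] = np.sum(patch * filters[f]) + b[f]   # dot product plus bias

print(out.shape)                         # (6, 28, 28): six 28x28 activation maps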
Right, and so a preview of how we're going to use these convolutional layers in our convolutional network is that our ConvNet is basically going to be a sequence of these convolutional layers stacked on top of each other, same way as what we had with the simple linear layers in their neural network. And then we're going to intersperse these with activation functions, so for example, a ReLU activation function. Right, and so you're going to get something like Conv, ReLU, and usually also some pooling layers, and then you're just going to get a sequence of these each creating an output that's now going to be the input to the next convolutional layer. Okay, and so each of these layers, as I said earlier, has multiple filters, right, many filters. And each of the filter is producing an activation map. And so when you look at multiple of these layers stacked together in a ConvNet, what ends up happening is you end up learning this hierarching of filters, where the filters at the earlier layers usually represent low-level features that you're looking for. So things kind of like edges, right. And then at the mid-level, you're going to get more complex kinds of features, so maybe it's looking more for things like corners and blobs and so on. And then at higher-level features, you're going to get things that are starting to more resemble concepts than blobs. And we'll go into more detail later in the class in how you can actually visualize all these features and try and interpret what your network, what kinds of features your network is learning. But the important thing for now is just to understand that what these features end up being when you have a whole stack of these, is these types of simple to more complex features. [faint speaking] Yeah. Oh, okay. Oh, okay, so the question is, what's the intuition for increasing the depth each time. So here I had three filters in the original layer and then six filters in the next layer. Right, and so this is mostly a design choice. You know, people in practice have found certain types of these configurations to work better. And so later on we'll go into case studies of different kinds of convolutional neural network architectures and design choices for these and why certain ones work better than others. But yeah, basically the choice of, you're going to have many design choices in a convolutional neural network, the size of your filter, the stride, how many filters you have, and so we'll talk about this all more later. Question. [faint speaking] Yeah, so the question is, as we're sliding this filter over the image spatially it looks like we're sampling the edges and corners less than the other locations. Yeah, that's a really good point, and we'll talk I think in a few slides about how we try and compensate for that. Okay, so each of these convolutional layers that we have stacked together, we saw how we're starting with more simpler features and then aggregating these into more complex features later on. And so in practice this is compatible with what Hubel and Wiesel noticed in their experiments, right, that we had these simple cells at the earlier stages of processing, followed by more complex cells later on. And so even though we didn't explicitly force our ConvNet to learn these kinds of features, in practice when you give it this type of hierarchical structure and train it using backpropagation, these are the kinds of filters that end up being learned. [faint speaking] Okay, so yeah, so the question is, what are we seeing in these visualizations. 
And so, alright so, in these visualizations, like, if we look at this Conv1, the first convolutional layer, each of these grid, each part of this grid is a one neuron. And so what we've visualized here is what the input looks like that maximizes the activation of that particular neuron. So what sort of image you would get that would give you the largest value, make that neuron fire and have the largest value. And so the way we do this is basically by doing backpropagation from a particular neuron activation and seeing what in the input will trigger, will give you the highest values of this neuron. And this is something that we'll talk about in much more depth in a later lecture about how we create all of these visualizations. But basically each element of these grids is showing what in the input would look like that basically maximizes the activation of the neuron. So in a sense, what is the neuron looking for? Okay, so here is an example of some of the activation maps produced by each filter, right. So we can visualize up here on the top we have this whole row of example five by five filters, and so this is basically a real case from a trained ConvNet where each of these is what a five by five filter looks like, and then as we convolve this over an image, so in this case this I think it's like a corner of a car, the car light, what the activation looks like. Right, and so here for example, if we look at this first one, this red filter, filter like with a red box around it, we'll see that it's looking for, the template looks like an edge, right, an oriented edge. And so if you slide it over the image, it'll have a high value, a more white value where there are edges in this type of orientation. And so each of these activation maps is kind of the output of sliding one of these filters over and where these filters are causing, you know, where this sort of template is more present in the image. And so the reason we call these convolutional is because this is related to the convolution of two signals, and so someone pointed out earlier that this is basically this convolution equation over here, for people who have seen convolutions before in signal processing, and in practice it's actually more like a correlation where we're convolving with the flipped version of the filter, but this is kind of a subtlety, it's not really important for the purposes of this class. But basically if you're writing out what you're doing, it has an expression that looks something like this, which is the standard definition of a convolution. But this is basically just taking a filter, sliding it spatially over the image and computing the dot product at every location. Okay, so you know, as I had mentioned earlier, like what our total convolutional neural network is going to look like is we're going to have an input image, and then we're going to pass it through this sequence of layers, right, where we're going to have a convolutional layer first. We usually have our non-linear layer after that. So ReLU is something that's very commonly used that we're going to talk about more later. And then we have these Conv, ReLU, Conv, ReLU layers, and then once in a while we'll use a pooling layer that we'll talk about later as well that basically downsamples the size of our activation maps. 
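Coming back to the convolution-versus-correlation point for a moment, here is a small sketch (assuming SciPy is available) showing that what a conv layer computes is really a cross-correlation, and that a textbook convolution is the same operation with the kernel flipped by 180 degrees.

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

img = np.random.randn(7, 7)
kernel = np.random.randn(3, 3)

xcorr = correlate2d(img, kernel, mode='valid')                 # what a "conv" layer actually computes
true_conv = convolve2d(img, kernel, mode='valid')              # textbook convolution
flipped = correlate2d(img, kernel[::-1, ::-1], mode='valid')   # correlation with the flipped kernel

assert np.allclose(true_conv, flipped)   # convolution == correlation with a 180-degree-flipped kernel
```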
And then finally at the end of this we'll take our last convolutional layer output and then we're going to use a fully connected layer that we've seen before, connected to all of these convolutional outputs, and use that to get a final score function basically like what we've already been working with. Okay, so now let's work out some examples of how the spatial dimensions work out. So let's take our 32 by 32 by three image as before, right, and we have our five by five by three filter that we're going to slide over this image. And we're going to see how we're going to use that to produce exactly this 28 by 28 activation map. So let's assume that we actually have a seven by seven input just to be simpler, and let's assume we have a three by three filter. So what we're going to do is we're going to take this filter, plop it down in our upper left-hand corner, right, and we're going to multiply, do the dot product, multiply all these values together to get our first value, and this is going to go into the upper left-hand value of our activation map. Right, and then what we're going to do next is we're just going to take this filter, slide it one position to the right, and then we're going to get another value out from here. And so we can continue with this to have another value, another, and in the end what we're going to get is a five by five output, right, because what fit was basically sliding this filter a total of five spatial locations horizontally and five spatial locations vertically. Okay, so as I said before there's different kinds of design choices that we can make. Right, so previously I slid it at every single spatial location and the interval at which I slide I'm going to call the stride. And so previously we used the stride of one. And so now let's see what happens if we have a stride of two. Right, so now we're going to take our first location the same as before, and then we're going to skip this time two pixels over and we're going to get our next value centered at this location. Right, and so now if we use a stride of two, we have in total three of these that can fit, and so we're going to get a three by three output. Okay, and so what happens when we have a stride of three, what's the output size of this? And so in this case, right, we have three, we slide it over by three again, and the problem is that here it actually doesn't fit. Right, so we slide it over by three and now it doesn't fit nicely within the image. And so what we in practice we just, it just doesn't work. We don't do convolutions like this because it's going to lead to asymmetric outputs happening. Right, and so just kind of looking at the way that we computed how many, what the output size is going to be, this actually can work into a nice formula where we take our dimension of our input N, we have our filter size F, we have our stride at which we're sliding along, and our final output size, the spatial dimension of each output size is going to be N minus F divided by the stride plus one, right, and you can kind of see this as a, you know, if I'm going to take my filter, let's say I fill it in at the very last possible position that it can be in and then take all the pixels before that, how many instances of moving by this stride can I fit in. Right, and so that's how this equation kind of works out. And so as we saw before, right, if we have N equal seven and F equals three, if we want a stride of one we plug it into this formula, we get five by five as we had before, and the same thing we had for two. 
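A small helper capturing the output-size rule just described, (N - F) / stride + 1, including the stride-of-three case that doesn't fit:

```python
def conv_output_size(N, F, stride):
    """Spatial output size for an N x N input and F x F filter: (N - F) / stride + 1."""
    if (N - F) % stride != 0:
        raise ValueError("filter does not fit cleanly with this stride")
    return (N - F) // stride + 1

print(conv_output_size(7, 3, 1))   # 5
print(conv_output_size(7, 3, 2))   # 3
# conv_output_size(7, 3, 3) raises: a 3x3 filter with stride 3 does not fit a 7x7 input
```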
And with a stride of three, this doesn't really work out. And so in practice it's actually common to zero pad the borders in order to make the size work out to what we want it to. And so this is kind of related to a question earlier, which is what do we do, right, at the corners. And so what in practice happens is we're going to actually pad our input image with zeros, and so now you're going to be able to place a filter centered at the upper right-hand pixel location of your actual input image. Okay, so here's a question, so who can tell me if I have my same input, seven by seven, three by three filter, stride one, but now I pad with a one pixel border, what's the size of my output going to be? [faint speaking] So, I heard some sixes, heard some sevens, so remember we have this formula that we had before. So if we plug in N is equal to seven, F is equal to three, right, and then our stride is equal to one. So what we actually get from this is: seven minus three is four, divided by one, plus one, is five. And so this is what we had before. So we actually need to adjust this formula a little bit, right, so this formula is the case where we don't have zero padded pixels. But if we do pad it, then if you now take your new output and you slide it along, you'll see that actually seven of the filters fit, so you get a seven by seven output. And plugging in our original formula, right, our N now is not seven, it's nine, so if we go back here we have N equals nine minus a filter size of three, which gives six. Right, divided by our stride, which is one, so still six, and then plus one we get seven. Right, and so once you've padded it you want to incorporate this padding into your formula. Yes, question. [faint speaking] Seven, okay, so the question is, what's the actual size of the output, is it seven by seven or seven by seven by three? The output is going to be seven by seven by the number of filters that you have. So remember each filter is going to do a dot product through the entire depth of your input volume. But then that's going to produce one number, right, so each filter is, let's see if we can go back here. Each filter is producing a one by seven by seven activation map output in this case, and so the depth is going to be the number of filters that we have. [faint speaking] Sorry, let me just, one second go back. Okay, can you repeat your question again? [muffled speaking] Okay, so the question is, how does this connect to before when we had a 32 by 32 by three input, right. So our input had depth and here in this example I'm showing a 2D example with no depth. And so yeah, I'm showing this for simplicity but in practice you're going to multiply throughout the entire depth as we had before, so your filter is going to be in this case a three by three spatial filter by whatever input depth that you had. So three by three by three in this case. Yeah, everything else stays the same. Yes, question. [muffled speaking] Yeah, so the question is, does the zero padding add some sort of extraneous features at the corners? And yeah, so I mean, we're doing our best to still get some value and, like, process that region of the image, and so zero padding is kind of one way to do this, where I guess we are detecting part of this template in this region.
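Looping back to the size formula for a moment: with a zero-padding border of P pixels on each side, the output size becomes (N + 2P - F) / stride + 1. A quick sketch of both cases:

```python
def conv_output_size_padded(N, F, stride, pad):
    """Spatial output size with zero padding: (N + 2*pad - F) / stride + 1."""
    return (N + 2 * pad - F) // stride + 1

print(conv_output_size_padded(7, 3, 1, 0))   # 5: no padding, the earlier case
print(conv_output_size_padded(7, 3, 1, 1))   # 7: a one-pixel border preserves the 7x7 input size
```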
There's also other ways to do this that, you know, you can try and like, mirror the values here or extend them, and so it doesn't have to be zero padding, but in practice this is one thing that works reasonably. And so, yeah, so there is a little bit of kind of artifacts at the edge and we sort of just, you do your best to deal with it. And in practice this works reasonably. I think there was another question. Yeah, question. [faint speaking] So if we have non-square images, do we ever use a stride that's different horizontally and vertically? So, I mean, there's nothing stopping you from doing that, you could, but in practice we just usually take the same stride, we usually operate square regions and we just, yeah we usually just take the same stride everywhere and it's sort of like, in a sense it's a little bit like, it's a little bit like the resolution at which you're, you know, looking at this image, and so usually there's kind of, you might want to match sort of your horizontal and vertical resolutions. But, yeah, so in practice you could but really people don't do that. Okay, another question. [faint speaking] So the question is, why do we do zero padding? So the way we do zero padding is to maintain the same input size as we had before. Right, so we started with seven by seven, and if we looked at just starting your filter from the upper left-hand corner, filling everything in, right, then we get a smaller size output, but we would like to maintain our full size output. Okay, so, yeah, so we saw how padding can basically help you maintain the size of the output that you want, as well as apply your filter at these, like, corner regions and edge regions. And so in general in terms of choosing, you know, your stride, your filter, your filter size, your stride size, zero padding, what's common to see is filters of size three by three, five by five, seven by seven, these are pretty common filter sizes. And so each of these, for three by three you will want to zero pad with one in order to maintain the same spatial size. If you're going to do five by five, you can work out the math, but it's going to come out to you want to zero pad by two. And then for seven you want to zero pad by three. Okay, and so again you know, the motivation for doing this type of zero padding and trying to maintain the input size, right, so we kind of alluded to this before, but if you have multiple of these layers stacked together... So if you have multiple of these layers stacked together you'll see that, you know, if we don't do this kind of zero padding, or any kind of padding, we're going to really quickly shrink the size of the outputs that we have. Right, and so this is not something that we want. Like, you can imagine if you have a pretty deep network then very quickly your, the size of your activation maps is going to shrink to something very small. And this is bad both because we're kind of losing out on some of this information, right, now you're using a much smaller number of values in order to represent your original image, so you don't want that. And then at the same time also as we talked about this earlier, your also kind of losing sort of some of this edge information, corner information that each time we're losing out and shrinking that further. Okay, so let's go through a couple more examples of computing some of these sizes. So let's say that we have an input volume which is 32 by 32 by three. And here we have 10 five by five filters. Let's use stride one and pad two. 
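The pad amounts quoted above (1 for 3 by 3, 2 for 5 by 5, 3 for 7 by 7) all come from the same stride-1, preserve-the-size rule; a one-line sketch:

```python
def same_pad(F):
    """Zero padding that preserves spatial size at stride 1, for odd filter sizes F."""
    return (F - 1) // 2

print([same_pad(F) for F in (3, 5, 7)])   # [1, 2, 3]
```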
And so who can tell me what's the output volume size of this? So you can think about the formula earlier. Sorry, what was it? [faint speaking] 32 by 32 by 10, yes that's correct. And so the way we can see this, right, is we have our input size N, which is 32. Then in this case we want to augment it by the padding that we added onto this. So we padded it by two in each dimension, right, so the total width and total height are going to be 32 plus two on each side, so 36. Then we subtract our filter size of five, divide by the stride of one, add one, and we get 32. So our output is going to be 32 by 32 for each filter. And then we have 10 filters total, so we have 10 of these activation maps, and our total output volume is going to be 32 by 32 by 10. Okay, next question, so what's the number of parameters in this layer? So remember we have 10 five by five filters. [faint speaking] I kind of heard something, but it was quiet. Can you guys speak up? 250, okay so I heard 250, which is close, but remember that each of these filters also goes through the depth of our input volume. So maybe this wasn't clearly written here because each of the filters is five by five spatially, but implicitly we also have the depth in here, right. It's going to go through the whole volume. So I heard, yeah, 750 I heard. Almost there, this is kind of a trick question 'cause also remember we usually always have a bias term, right, so in practice each filter has five by five by three weights, plus our one bias term, so we have 76 parameters per filter, and then we have 10 of these total, and so there's 760 total parameters. Okay, and so here's just a summary of the convolutional layer that you guys can read a little bit more carefully later on. But we have our input volume of a certain dimension, and we have all of these choices for our filters, right, where we have the number of filters, the filter size, the stride, the amount of zero padding, and you can basically use all of these, go through the computations that we talked about earlier, in order to find out what your output volume is actually going to be and how many total parameters you have. And so some common settings of this. You know, we talked earlier about common filter sizes of three by three, five by five. A stride of one or two is pretty common. And then your padding P is going to be whatever fits, like, whatever will preserve your spatial extent is what's common. And then for the total number of filters K, usually we use powers of two just to be nice, so, you know, 32, 64, 128 and so on, 512, these are pretty common numbers that you'll see. And just as an aside, we can also do a one by one convolution, this still makes perfect sense, where given a one by one convolution we still slide it over each spatial extent, but now, you know, the spatial region is not really five by five, it's just kind of the trivial case of one by one, but we are still having this filter go through the entire depth. Right, so this is going to be a dot product through the entire depth of your input volume. And so the output here, right, if we have an input volume of 56 by 56 by 64 depth and we're going to do a one by one convolution with 32 filters, then our output is going to be 56 by 56 by our number of filters, 32. Okay, and so here's an example of a convolutional layer in TORCH, a deep learning framework.
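Before looking at the framework examples, here is a small sketch reproducing the two numbers just worked out: the 32 by 32 by 10 output volume and the 760 parameters.

```python
def conv_layer_stats(N, C, K, F, stride, pad):
    """Output spatial size and parameter count for a conv layer on an N x N x C input,
    with K filters of size F x F x C, each with one bias."""
    out = (N + 2 * pad - F) // stride + 1
    params = K * (F * F * C + 1)             # +1 for the bias term of each filter
    return out, params

print(conv_layer_stats(N=32, C=3, K=10, F=5, stride=1, pad=2))   # (32, 760)
```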
And so you'll see that, you know, last lecture we talked about how you can go into these deep learning frameworks, you can see these definitions of each layer, right, where they have kind of the forward pass and the backward pass implemented for each layer. And so you'll see convolutions, spatial convolution is going to be just one of these, and then the arguments that it's going to take are going to be all of these design choices of, you know, I mean, I guess your input and output sizes, but also your choices of like your kernel width, your kernel size, padding, and these kinds of things. Right, and so if we look at another framework, Caffe, you'll see something very similar, where again now when you're defining your network you define networks in Caffe using this kind of, you know, proto text file where you're specifying each of your design choices for your layer and you can see for a convolutional layer will say things like, you know, the number of outputs that we have, this is going to be the number of filters for Caffe, as well as the kernel size and stride and so on. Okay, and so I guess before I go on, any questions about convolution, how the convolution operation works? Yes, question. [faint speaking] Yeah, so the question is, what's the intuition behind how you choose your stride. And so at one sense it's kind of the resolution at which you slide it on, and usually the reason behind this is because when we have a larger stride what we end up getting as the output is a down sampled image, right, and so what this downsampled image lets us have is both, it's a way, it's kind of like pooling in a sense but it's just a different and sometimes works better way of doing pooling is one of the intuitions behind this, 'cause you get the same effect of downsampling your image, and then also as you're doing this you're reducing the size of the activation maps that you're dealing with at each layer, right, and so this also affects later on the total number of parameters that you have because for example at the end of all your Conv layers, now you might put on fully connected layers on top, for example, and now the fully connected layer's going to be connected to every value of your convolutional output, right, and so a smaller one will give you smaller number of parameters, and so now you can get into, like, basically thinking about trade offs of, you know, number of parameters you have, the size of your model, overfitting, things like that, and so yeah, these are kind of some of the things that you want to think about with choosing your stride. Okay, so now if we look a little bit at kind of the, you know, brain neuron view of a convolutional layer, similar to what we looked at for the neurons in the last lecture. So what we have is that at every spatial location, we take a dot product between a filter and a specific part of the image, right, and we get one number out from here. And so this is the same idea of doing these types of dot products, right, taking your input, weighting it by these Ws, right, values of your filter, these weights that are the synapses, and getting a value out. But the main difference here is just that now your neuron has local connectivity. So instead of being connected to the entire input, it's just looking at a local region spatially of your image. And so this looks at a local region and then now you're going to get kind of, you know, this, how much this neuron is being triggered at every spatial location in your image. 
Right, so now you preserve the spatial structure and you can say, you know, be able to reason on top of these kinds of activation maps in later layers. And just a little bit of terminology, again for, you know, we have this five by five filter, we can also call this a five by five receptive field for the neuron, because this is, the receptive field is basically the, you know, input field that this field of vision that this neuron is receiving, right, and so that's just another common term that you'll hear for this. And then again remember each of these five by five filters we're sliding them over the spatial locations but they're the same set of weights, they share the same parameters. Okay, and so, you know, as we talked about what we're going to get at this output is going to be this volume, right, where spatially we have, you know, let's say 28 by 28 and then our number of filters is the depth. And so for example with five filters, what we're going to get out is this 3D grid that's 28 by 28 by five. And so if you look at the filters across in one spatial location of the activation volume and going through depth these five neurons, all of these neurons, basically the way you can interpret this is they're all looking at the same region in the input volume, but they're just looking for different things, right. So they're different filters applied to the same spatial location in the image. And so just a reminder again kind of comparing with the fully connected layer that we talked about earlier. In that case, right, if we look at each of the neurons in our activation or output, each of the neurons was connected to the entire stretched out input, so it looked at the entire full input volume, compared to now where each one just looks at this local spatial region. Question. [muffled talking] Okay, so the question is, within a given layer, are the filters completely symmetric? So what do you mean by symmetric exactly, I guess? Right, so okay, so the filters, are the filters doing, they're doing the same dimension, the same calculation, yes. Okay, so is there anything different other than they have the same parameter values? No, so you're exactly right, we're just taking a filter with a given set of, you know, five by five by three parameter values, and we just slide this in exactly the same way over the entire input volume to get an activation map. Okay, so you know, we've gone into a lot of detail in what these convolutional layers look like, and so now I'm just going to go briefly through the other layers that we have that form this entire convolutional network. Right, so remember again, we have convolutional layers interspersed with pooling layers once in a while as well as these non-linearities. Okay, so what the pooling layers do is that they make the representations smaller and more manageable, right, so we talked about this earlier with someone asked a question of why we would want to make the representation smaller. And so this is again for it to have fewer, it effects the number of parameters that you have at the end as well as basically does some, you know, invariance over a given region. And so what the pooling layer does is it does exactly just downsamples, and it takes your input volume, so for example, 224 by 224 by 64, and spatially downsamples this. So in the end you'll get out 112 by 112. And it's important to note this doesn't do anything in the depth, right, we're only pooling spatially. So the number of, your input depth is going to be the same as your output depth. 
And so, for example, a common way to do this is max pooling. So in this case our pooling layer also has a filter size and this filter size is going to be the region at which we pool over, right, so in this case if we have two by two filters, we're going to slide this, and so, here, we also have stride two in this case, so we're going to take this filter and we're going to slide it along our input volume in exactly the same way as we did for convolution. But here instead of doing these dot products, we just take the maximum value of the input volume in that region. Right, so here if we look at the red values, the value of that will be six is the largest. If we look at the greens it's going to give an eight, and then we have a three and a four. Yes, question. [muffled speaking] Yeah, so the question is, is it typical to set up the stride so that there isn't an overlap? And yeah, so for the pooling layers it is, I think the more common thing to do is to have them not have any overlap, and I guess the way you can think about this is basically we just want to downsample and so it makes sense to kind of look at this region and just get one value to represent this region and then just look at the next region and so on. Yeah, question. [faint speaking] Okay, so the question is, why is max pooling better than just taking the, doing something like average pooling? Yes, that's a good point, like, average pooling is also something that you can do, and intuition behind why max pooling is commonly used is that it can have this interpretation of, you know, if this is, these are activations of my neurons, right, and so each value is kind of how much this neuron fired in this location, how much this filter fired in this location. And so you can think of max pooling as saying, you know, giving a signal of how much did this filter fire at any location in this image. Right, and if we're thinking about detecting, you know, doing recognition, this might make some intuitive sense where you're saying, well, you know, whether a light or whether some aspect of your image that you're looking for, whether it happens anywhere in this region we want to fire at with a high value. Question. [muffled speaking] Yeah, so the question is, since pooling and stride both have the same effect of downsampling, can you just use stride instead of pooling and so on? Yeah, and so in practice I think looking at more recent neural network architectures people have begun to use stride more in order to do the downsampling instead of just pooling. And I think this gets into things like, you know, also like fractional strides and things that you can do. But in practice this in a sense maybe has a little bit better way to get better results using that, so. Yeah, so I think using stride is definitely, you can do it and people are doing it. Okay, so let's see, where were we. Okay, so yeah, so with these pooling layers, so again, there's right, some design choices that you make, you take this input volume of W by H by D, and then you're going to set your hyperparameters for design choices of your filter size or the spatial extent over which you are pooling, as well as your stride, and then you can again compute your output volume using the same equation that you used earlier for convolution, it still applies here, right, so we still have our W total extent minus filter size divided by stride plus one. 
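A minimal sketch of the max pooling just described, using the 2 by 2 filter with stride 2 and the 224 by 224 by 64 example; note that the depth is untouched.

```python
import numpy as np

def max_pool(x, F=2, stride=2):
    """Max pooling: take the max over each F x F spatial window, independently per depth slice."""
    H, W, D = x.shape
    H_out = (H - F) // stride + 1
    W_out = (W - F) // stride + 1
    out = np.zeros((H_out, W_out, D))
    for i in range(H_out):
        for j in range(W_out):
            window = x[i*stride:i*stride+F, j*stride:j*stride+F, :]
            out[i, j, :] = window.max(axis=(0, 1))   # max over the spatial window only, not depth
    return out

a = np.random.randn(224, 224, 64)
print(max_pool(a).shape)   # (112, 112, 64)
```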
Okay, and so just one other thing to note, it's also, typically people don't really use zero padding for the pooling layers because you're just trying to do a direct downsampling, right, so there isn't this problem of like, applying a filter at the corner and having some part of the filter go off your input volume. And so for pooling we don't usually have to worry about this and we just directly downsample. And so some common settings for the pooling layer is a filter size of two by two or three by three strides. Two by two, you know, you can have, also you can still have pooling of two by two even with a filter size of three by three, I think someone asked that earlier, but in practice it's pretty common just to have two by two. Okay, so now we've talked about these convolutional layers, the ReLU layers were the same as what we had before with the, you know, just the base neural network that we talked about last lecture. So we intersperse these and then we have a pooling layer every once in a while when we feel like downsampling, right. And then the last thing is that at the end we want to have a fully connected layer. And so this will be just exactly the same as the fully connected layers that you've seen before. So in this case now what we do is we take the convolutional network output, at the last layer we have some volume, so we're going to have width by height by some depth, and we just take all of these and we essentially just stretch these out, right. And so now we're going to get the same kind of, you know, basically 1D input that we're used to for a vanilla neural network, and then we're going to apply this fully connected layer on top, so now we're going to have connections to every one of these convolutional map outputs. And so what you can think of this is basically, now instead of preserving, you know, before we were preserving spatial structure, right, and so but at the last layer at the end, we want to aggregate all of this together and we want to reason basically on top of all of this as we had before. And so what you get from that is just our score outputs as we had earlier. Okay, so-- - [Student] This is sort of a silly question about this visual. Like what are the 16 pixels that are on the far right, like what should be interpreting those as? - Okay, so the question is, what are the 16 pixels that are on the far right, do you mean the-- - [Student] Like that column of-- - [Instructor] Oh, each column. - [Student] The column on the far right, yeah. - [Instructor] The green ones or the black ones? - [Student] The ones labeled pool. - The one with hold on, pool. Oh, okay, yeah, so the question is how do we interpret this column, right, for example at pool. And so what we're showing here is each of these columns is the output activation maps, right, the output from one of these layers. And so starting from the beginning, we have our car, after the convolutional layer we now have these activation maps of each of the filters slid spatially over the input image. Then we pass that through a ReLU, so you can see the values coming out from there. And then going all the way over, and so what you get for the pooling layer is that it's really just taking the output of the ReLU layer that came just before it and then it's pooling it. So it's going to downsample it, right, and then it's going to take the max value in each filter location. 
And so now if you look at this pool layer output, like, for example, the last one that you were mentioning, it looks the same as this ReLU output except that it's downsampled and that it has this kind of max value at every spatial location and so that's the minor difference that you'll see between those two. [distant speaking] So the question is, now this looks like just a very small amount of information, right, so how can it know to classify it from here? And so the way that you should think about this is that each of these values inside one of these pool outputs is actually, it's the accumulation of all the processing that you've done throughout this entire network, right. So it's at the very top of your hierarchy, and so each actually represents kind of a higher level concept. So we saw before, you know, for example, Hubel and Wiesel and building up these hierarchical filters, where at the bottom level we're looking for edges, right, or things like very simple structures, like edges. And so after your convolutional layer the outputs that you see here in this first column is basically how much do specific, for example, edges, fire at different locations in the image. But then as you go through you're going to get more complex, it's looking for more complex things, right, and so the next convolutional layer is going to fire at how much, you know, let's say certain kinds of corners show up in the image, right, because it's reasoning. Its input is not the original image, its input is the output, it's already the edge maps, right, so it's reasoning on top of edge maps, and so that allows it to get more complex, detect more complex things. And so by the time you get all the way up to this last pooling layer, each value is representing how much a relatively complex sort of template is firing. Right, and so because of that now you can just have a fully connected layer, you're just aggregating all of this information together to get, you know, a score for your class. So each of these values is how much a pretty complicated complex concept is firing. Question. [faint speaking] So the question is, when do you know you've done enough pooling to do the classification? And the answer is you just try and see. So in practice, you know, these are all design choices and you can think about this a little bit intuitively, right, like you want to pool but if you pool too much you're going to have very few values representing your entire image and so on, so it's just kind of a trade off. Something reasonable versus people have tried a lot of different configurations so you'll probably cross validate, right, and try over different pooling sizes, different filter sizes, different number of layers, and see what works best for your problem because yeah, like every problem with different data is going to, you know, different set of these sorts of hyperparameters might work best. Okay, so last thing, just wanted to point you guys to this demo of training a ConvNet, which was created by Andre Karpathy, the originator of this class. And so he wrote up this demo where you can basically train a ConvNet on CIFAR-10, the dataset that we've seen before, right, with 10 classes. And what's nice about this demo is you can, it basically plots for you what each of these filters look like, what the activation maps look like. So some of the images I showed earlier were taken from this demo. 
And so you can go try it out, play around with it, and you know, just go through and try and get a sense for what these activation maps look like. And just one thing to note, usually the first layer activation maps are, you can interpret them, right, because they're operating directly on the input image so you can see what these templates mean. As you get to higher level layers it starts getting really hard, like how do you actually interpret what do these mean. So for the most part it's just hard to interpret so you shouldn't, you know, don't worry if you can't really make sense of what's going on. But it's still nice just to see the entire flow and what outputs are coming out. Okay, so in summary, so today we talked about how convolutional neural networks work, how they're basically stacks of these convolutional and pooling layers followed by fully connected layers at the end. There's been a trend towards having smaller filters and deeper architectures, so we'll talk more about case studies for some of these later on. There's also been a trend towards getting rid of these pooling and fully connected layers entirely. So just keeping these, just having, you know, Conv layers, very deep networks of Conv layers, so again we'll discuss all of this later on. And then typical architectures again look like this, you know, as we had earlier. Conv, ReLU for some N number of steps followed by a pool every once in a while, this whole thing repeated some number of times, and then followed by fully connected ReLU layers that we saw earlier, you know, one or two or just a few of these, and then a softmax at the end for your class scores. And so, you know, some typical values you might have N up to five of these. You're going to have pretty deep layers of Conv, ReLU, pool sequences, and then usually just a couple of these fully connected layers at the end. But we'll also go into some newer architectures like ResNet and GoogLeNet, which challenge this and will give pretty different types of architectures. Okay, thank you and see you guys next time.
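For reference, here is a rough sketch of the [(Conv - ReLU) x N - Pool] x M, then fully connected layers and class scores pattern summarized above. It is written in PyTorch purely for compactness (the lecture shows Torch and Caffe instead), and the layer sizes are made-up, illustrative choices rather than anything taken from the lecture.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                              # 32x32 -> 16x16
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                              # 16x16 -> 8x8
    nn.Flatten(),                                 # stretch out the last conv volume
    nn.Linear(64 * 8 * 8, 256), nn.ReLU(),
    nn.Linear(256, 10),                           # class scores; a softmax is applied in the loss
)

scores = model(torch.randn(1, 3, 32, 32))         # shape (1, 10)
```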
Lecture_Collection_Convolutional_Neural_Networks_for_Visual_Recognition_Spring_2017
Lecture_13_Generative_Models.txt
- Okay we have a lot to cover today so let's get started. Today we'll be talking about Generative Models. And before we start, a few administrative details. So midterm grades will be released on Gradescope this week A reminder that A3 is due next Friday May 26th. The HyperQuest deadline for extra credit you can do this still until Sunday May 21st. And our poster session is June 6th from 12 to 3 P.M.. Okay so an overview of what we're going to talk about today we're going to switch gears a little bit and take a look at unsupervised learning today. And in particular we're going to talk about generative models which is a type of unsupervised learning. And we'll look at three types of generative models. So pixelRNNs and pixelCNNs variational autoencoders and Generative Adversarial networks. So so far in this class we've talked a lot about supervised learning and different kinds of supervised learning problems. So in the supervised learning set up we have our data X and then we have some labels Y. And our goal is to learn a function that's mapping from our data X to our labels Y. And these labels can take many different types of forms. So for example, we've looked at classification where our input is an image and we want to output Y, a class label for the category. We've talked about object detection where now our input is still an image but here we want to output the bounding boxes of instances of up to multiple dogs or cats. We've talked about semantic segmentation where here we have a label for every pixel the category that every pixel belongs to. And we've also talked about image captioning where here our label is now a sentence and so it's now in the form of natural language. So unsupervised learning in this set up, it's a type of learning where here we have unlabeled training data and our goal now is to learn some underlying hidden structure of the data. Right, so an example of this can be something like clustering which you guys might have seen before where here the goal is to find groups within the data that are similar through some type of metric. For example, K means clustering. Another example of an unsupervised learning task is a dimensionality reduction. So in this problem want to find axes along which our training data has the most variation, and so these axes are part of the underlying structure of the data. And then we can use this to reduce of dimensionality of the data such that the data has significant variation among each of the remaining dimensions. Right, so this example here we start off with data in three dimensions and we're going to find two axes of variation in this case and reduce our data projected down to 2D. Another example of unsupervised learning is learning feature representations for data. We've seen how to do this in supervised ways before where we used the supervised loss, for example classification. Where we have the classification label. We have something like a Softmax loss And we can train a neural network where we can interpret activations for example our FC7 layers as some kind of future representation for the data. And in an unsupervised setting, for example here autoencoders which we'll talk more about later In this case our loss is now trying to reconstruct the input data to basically, you have a good reconstruction of our input data and use this to learn features. So we're learning a feature representation without using any additional external labels. 
And finally another example of unsupervised learning is density estimation where in this case we want to estimate the underlying distribution of our data. So for example in this top case over here, we have points in 1-d and we can try and fit a Gaussian into this density and in this bottom example over here it's 2D data and here again we're trying to estimate the density and we can model this density. We want to fit a model such that the density is higher where there's more points concentrated. And so to summarize the differences in unsupervised learning which we've looked a lot so far, we want to use label data to learn a function mapping from X to Y and an unsupervised learning we use no labels and instead we try to learn some underlying hidden structure of the data, whether this is grouping, acts as a variation or underlying density estimation. And unsupervised learning is a huge and really exciting area of research and and some of the reasons are that training data is really cheap, it doesn't use labels so we're able to learn from a lot of data at one time and basically utilize a lot more data than if we required annotating or finding labels for data. And unsupervised learning is still relatively unsolved research area by comparison. There's a lot of open problems in this, but it also, it holds the potential of if you're able to successfully learn and represent a lot of the underlying structure in the data then this also takes you a long way towards the Holy Grail of trying to understand the structure of the visual world. So that's a little bit of kind of a high-level big picture view of unsupervised learning. And today will focus more specifically on generative models which is a class of models for unsupervised learning where given training data our goal is to try and generate new samples from the same distribution. Right, so we have training data over here generated from some distribution P data and we want to learn a model, P model to generate samples from the same distribution and so we want to learn P model to be similar to P data. And generative models address density estimations. So this problem that we saw earlier of trying to estimate the underlying distribution of your training data which is a core problem in unsupervised learning. And we'll see that there's several flavors of this. We can use generative models to do explicit density estimation where we're going to explicitly define and solve for our P model or we can also do implicit density estimation where in this case we'll learn a model that can produce samples from P model without explicitly defining it. So, why do we care about generative models? Why is this a really interesting core problem in unsupervised learning? Well there's a lot of things that we can do with generative models. If we're able to create realistic samples from the data distributions that we want we can do really cool things with this, right? We can generate just beautiful samples to start with so on the left you can see a completely new samples of just generated by these generative models. Also in the center here generated samples of images we can also do tasks like super resolution, colorization so hallucinating or filling in these edges with generated ideas of colors and what the purse should look like. We can also use generative models of time series data for simulation and planning and so this will be useful in for reinforcement learning applications which we'll talk a bit more about reinforcement learning in a later lecture. 
And training generative models can also enable inference of latent representations. Learning latent features that can be useful as general features for downstream tasks. So if we look at types of generative models these can be organized into the taxonomy here where we have these two major branches that we talked about, explicit density models and implicit density models. And then we can also get down into many of these other sub categories. And well we can refer to this figure is adapted from a tutorial on GANs from Ian Goodfellow and so if you're interested in some of these different taxonomy and categorizations of generative models this is a good resource that you can take a look at. But today we're going to discuss three of the most popular types of generative models that are in use and in research today. And so we'll talk first briefly about pixelRNNs and CNNs And then we'll talk about variational autoencoders. These are both types of explicit density models. One that's using a tractable density and another that's using an approximate density And then we'll talk about generative adversarial networks, GANs which are a type of implicit density estimation. So let's first talk about pixelRNNs and CNNs. So these are a type of fully visible belief networks which are modeling a density explicitly so in this case what they do is we have this image data X that we have and we want to model the probability or likelihood of this image P of X. Right and so in this case, for these kinds of models, we use the chain rule to decompose this likelihood into a product of one dimensional distribution. So we have here the probability of each pixel X I conditioned on all previous pixels X1 through XI - 1. and your likelihood all right, your joint likelihood of all the pixels in your image is going to be the product of all of these pixels together, all of these likelihoods together. And then once we define this likelihood, in order to train this model we can just maximize the likelihood of our training data under this defined density. So if we look at this this distribution over pixel values right, we have this P of XI given all the previous pixel values, well this is a really complex distribution. So how can we model this? Well we've seen before that if we want to have complex transformations we can do these using neural networks. Neural networks are a good way to express complex transformations. And so what we'll do is we'll use a neural network to express this complex function that we have of the distribution. And one thing you'll see here is that, okay even if we're going to use a neural network for this another thing we have to take care of is how do we order the pixels. Right, I said here that we have a distribution for P of XI given all previous pixels but what does all previous the pixels mean? So we'll take a look at that. So PixelRNN was a model proposed in 2016 that basically defines a way for setting up and optimizing this problem and so how this model works is that we're going to generate pixels starting in a corner of the image. So we can look at this grid as basically the pixels of your image and so what we're going to do is start from the pixel in the upper left-hand corner and then we're going to sequentially generate pixels based on these connections from the arrows that you can see here. And each of the dependencies on the previous pixels in this ordering is going to be modeled using an RNN or more specifically an LSTM which we've seen before in lecture. 
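As a quick recap of the factorization being modeled here, the chain-rule decomposition described above can be written as

```latex
p(x) \;=\; \prod_{i=1}^{n} p\big(x_i \mid x_1, \ldots, x_{i-1}\big)
```

so the joint likelihood of an image is a product of per-pixel conditionals, each of which is modeled by the network, and training maximizes this likelihood over the training images.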
Right so using this we can basically continue to move forward, just moving down along this diagonal and generating all of these pixel values dependent on the pixels that they're connected to. And so this works really well but the drawback here is that this sequential generation, right, it's actually quite slow to do this. You can imagine, you know, if you're going to generate a new image, instead of all of these feed forward networks that we've seen with CNNs, here we're going to have to iteratively go through and generate all of these pixels. So a little bit later, after pixelRNN, another model called pixelCNN was introduced. And this has a very similar setup as pixelRNN, and we're still going to do this image generation starting from the corner of the image and expanding outwards, but the difference now is that instead of using an RNN to model all these dependencies we're going to use a CNN instead. And we're now going to use a CNN over a context region that you can see here around the particular pixel that we're going to generate now. Right so we take the pixels around it, this gray area within the region that's already been generated, and then we can pass this through a CNN and use that to generate our next pixel value. And so what this is going to give is a CNN, a neural network, at each pixel location, right, and the output of this is going to be a softmax over the pixel values here, in this case values from 0 to 255, and then we can train this by maximizing the likelihood of the training images. Right so we say that basically we want to take a training image, we're going to do this generation process, and at each pixel location we have the ground truth training data image value that we have here, and this is basically the label, the classification label, that we want our pixel to be, which of these 256 values, and we can train this using a Softmax loss. Right and so basically the effect of doing this is that we're going to maximize the likelihood of our training data pixels being generated. Okay any questions about this? Yes. [student's words obscured due to lack of microphone] Yeah, so the question is, I thought we were talking about unsupervised learning, why do we have basically a classification label here? The reason is that this label, this output that we have, is the value of the input training data. So we have no external labels, right? We didn't go and have to manually collect any labels for this, we're just taking our input data and saying that this is what we use for the loss function. [student's words obscured due to lack of microphone] The question is, is this like bag of words? I would say it's not really bag of words, it's more saying that we're outputting a distribution over pixel values at each location of our image, right, and what we want to do is we want to maximize the likelihood of our input, our training data, being produced, being generated. Right so, in that sense, this is why it's using our input data to create our loss.
So using pixelCNN training is faster than pixelRNN because here now right at every pixel location we want to maximize the value of our, we want to maximize the likelihood of our training data showing up and so we have all of these values already right, just from our training data and so we can do this much faster but a generation time for a test time we want to generate a completely new image right, just starting from the corner and we're not, we're not trying to do any type of learning so in that generation time we still have to generate each of these pixel locations before we can generate the next location. And so generation time here it still slow even though training time is faster. Question. [student's words obscured due to lack of microphone] So the question is, is this training a sensitive distribution to what you pick for the first pixel? Yeah, so it is dependent on what you have as the initial pixel distribution and then everything is conditioned based on that. So again, how do you pick this distribution? So at training time you have these distributions from your training data and then at generation time you can just initialize this with either uniform or from your training data, however you want. And then once you have that everything else is conditioned based on that. Question. [student's words obscured due to lack of microphone] Yeah so the question is is there a way that we define this in this chain rule fashion instead of predicting all the pixels at one time? And so we'll see, we'll see models later that do do this, but what the chain rule allows us to do is it allows us to find this very tractable density that we can then basically optimize and do, directly optimizes likelihood Okay so these are some examples of generations from this model and so here on the left you can see generations where the training data is CIFAR-10, CIFAR-10 dataset. And so you can see that in general they are starting to capture statistics of natural images. You can see general types of blobs and kind of things that look like parts of natural images coming out. On the right here it's ImageNet, we can again see samples from here and these are starting to look like natural images but they're still not, there's still room for improvement. You can still see that there are differences obviously with regional training images and some of the semantics are not clear in here. So, to summarize this, pixelRNNs and CNNs allow you to explicitly compute likelihood P of X. It's an explicit density that we can optimize. And being able to do this also has another benefit of giving a good evaluation metric. You know you can kind of measure how good your samples are by this likelihood of the data that you can compute. And it's able to produce pretty good samples but it's still an active area of research and the main disadvantage of these methods is that the generation is sequential and so it can be pretty slow. And these kinds of methods have also been used for generating audio for example. And you can look online for some pretty interesting examples of this, but again the drawback is that it takes a long time to generate these samples. 
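To see why generation stays slow, here is a minimal sketch of the sequential sampling loop. The `model` argument is a hypothetical stand-in for a trained PixelRNN/PixelCNN that returns a softmax over the 256 pixel values at one position; the uniform placeholder below is only there so the sketch runs.

```python
import numpy as np

def generate(model, H=32, W=32):
    """Sample an image pixel by pixel; each pixel is conditioned on the ones already generated,
    so the loop cannot be parallelized at test time."""
    img = np.zeros((H, W), dtype=np.int64)
    for i in range(H):
        for j in range(W):
            probs = model(img, i, j)                      # shape (256,), sums to 1
            img[i, j] = np.random.choice(256, p=probs)    # sample this pixel value
    return img

def uniform_model(img, i, j):
    return np.ones(256) / 256.0    # placeholder where a trained network would go

sample = generate(uniform_model, H=8, W=8)   # an 8x8 "image" of sampled pixel values
```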
And so there's a lot of work, has been work since then on still on improving pixelCNN performance And so all kinds of different you know architecture changes add the loss function formulating this differently on different types of training tricks And so if you're interested in learning more about this you can look at some of these papers on PixelCNN and then other pixelCNN plus plus better improved version that came out this year. Okay so now we're going to talk about another type of generative models call variational autoencoders. And so far we saw that pixelCNNs defined a tractable density function, right, using this this definition and based on that we can optimize directly optimize the likelihood of the training data. So with variational autoencoders now we're going to define an intractable density function. We're now going to model this with an additional latent variable Z and we'll talk in more detail about how this looks. And so our data likelihood P of X is now basically has to be this integral right, taking the expectation over all possible values of Z. And so this now is going to be a problem. We'll see that we cannot optimize this directly. And so instead what we have to do is we have to derive and optimize a lower bound on the likelihood instead. Yeah, question. So the question is is what is Z? Z is a latent variable and I'll go through this in much more detail. So let's talk about some background first. Variational autoencoders are related to a type of unsupervised learning model called autoencoders. And so we'll talk little bit more first about autoencoders and what they are and then I'll explain how variational autoencoders are related and build off of this and allow you to generate data. So with autoencoders we don't use this to generate data, but it's an unsupervised approach for learning a lower dimensional feature representation from unlabeled training data. All right so in this case we have our input data X and then we're going to want to learn some features that we call Z. And then we'll have an encoder that's going to be a mapping, a function mapping from this input data to our feature Z. And this encoder can take many different forms right, they would generally use neural networks so originally these models have been around, autoencoders have been around for a long time. So in the 2000s we used linear layers of non-linearities, then later on we had fully connected deeper networks and then after that we moved on to using CNNs for these encoders. So we take our input data X and then we map this to some feature Z. And Z we usually have as, we usually specify this to be smaller than X and we perform basically dimensionality reduction because of that. So the question who has an idea of why do we want to do dimensionality reduction here? Why do we want Z to be smaller than X? Yeah. [student's words obscured due to lack of microphone] So the answer I heard is Z should represent the most important features in X and that's correct. So we want Z to be able to learn features that can capture meaningful factors of variation in the data. Right this makes them good features. So how can we learn this feature representation? Well the way autoencoders do this is that we train the model such that the features can be used to reconstruct our original data. So what we want is we want to have input data that we use an encoder to map it to some lower dimensional features Z. 
This is the output of the encoder network, and we want to be able to take these features that were produced based on this input data and then use a decoder a second network and be able to output now something of the same size dimensionality as X and have it be similar to X right so we want to be able to reconstruct the original data. And again for the decoder we are basically using same types of networks as encoders so it's usually a little bit symmetric and now we can use CNN networks for most of these. Okay so the process is going to be we're going to take our input data right we pass it through our encoder first which is going to be something for example like a four layer convolutional network and then we're going to pass it, get these features and then we're going to pass it through a decoder which is a four layer for example upconvolutional network and then get a reconstructed data out at the end of this. Right in the reason why we have a convolutional network for the encoder and an upconvolutional network for the decoder is because at the encoder we're basically taking it from this high dimensional input to these lower dimensional features and now we want to go the other way go from our low dimensional features back out to our high dimensional reconstructed input. And so in order to get this effect that we said we wanted before of being able to reconstruct our input data we'll use something like an L2 loss function. Right that basically just says let me make my pixels of my input data to be the same as my, my pixels in my reconstructed data to be the same as the pixels of my input data. An important thing to notice here, this relates back to a question that we had earlier, is that even though we have this loss function here, there's no, there's no external labels that are being used in training this. All we have is our training data that we're going to use both to pass through the network as well as to compute our loss function. So once we have this after training this model what we can do is we can throw away this decoder. All this was used was too to be able to produce our reconstruction input and be able to compute our loss function. And we can use the encoder that we have which produces our feature mapping and we can use this to initialize a supervised model. Right and so for example we can now go from this input to our features and then have an additional classifier network on top of this that now we can use to output a class label for example for classification problem we can have external labels from here and use our standard loss functions like Softmax. And so the value of this is that we basically were able to use a lot of unlabeled training data to try and learn good general feature representations. Right, and now we can use this to initialize a supervised learning problem where sometimes we don't have so much data we only have small data. And we've seen in previous homeworks and classes that with small data it's hard to learn a model, right? You can have over fitting and all kinds of problems and so this allows you to initialize your model first with better features. Okay so we saw that autoencoders are able to reconstruct data and are able to, as a result, learn features to initialize, that we can use to initialize a supervised model. And we saw that these features that we learned have this intuition of being able to capture factors of variation in the training data. 
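As a rough illustration of what was just described, here is a minimal convolutional autoencoder sketch in PyTorch. The layer sizes and the two-layer encoder/decoder are illustrative choices, not the four-layer networks from the slides; the point is the encode-then-reconstruct structure and the label-free L2 loss.

```python
import torch
import torch.nn as nn

# Minimal sketch of a convolutional autoencoder (sizes are illustrative).
class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: high-dimensional image -> lower-dimensional feature map z
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
        )
        # Decoder: feature z -> reconstruction with the same size as the input
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 7x7 -> 14x14
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 14x14 -> 28x28
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z)

model = AutoEncoder()
x = torch.rand(8, 1, 28, 28)          # a fake mini-batch of unlabeled images
loss = nn.MSELoss()(model(x), x)      # L2 reconstruction loss -- no external labels
loss.backward()                       # (an optimizer step would follow in training)
```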
All right so based on this intuition of okay these, we can have this latent this vector Z which has factors of variation in our training data. Now a natural question is well can we use a similar type of setup to generate new images? And so now we will talk about variational autoencoders which is a probabillstic spin on autoencoders that will let us sample from the model in order to generate new data. Okay any questions on autoencoders first? Okay, so variational autoencoders. All right so here we assume that our training data that we have X I from one to N is generated from some underlying, unobserved latent representation Z. Right, so it's this intuition that Z is some vector right which element of Z is capturing how little or how much of some factor of variation that we have in our training data. Right so the intuition is, you know, maybe these could be something like different kinds of attributes. Let's say we're trying to generate faces, it could be how much of a smile is on the face, it could be position of the eyebrows hair orientation of the head. These are all possible types of latent factors that could be learned. Right, and so our generation process is that we're going to sample from a prior over Z. Right so for each of these attributes for example, you know, how much smile that there is, we can have a prior over what sort of distribution we think that there should be for this so, a gaussian is something that's a natural prior that we can use for each of these factors of Z and then we're going to generate our data X by sampling from a conditional, conditional distribution P of X given Z. So we sample Z first, we sample a value for each of these latent factors and then we'll use that and sample our image X from here. And so the true parameters of this generation process are theta, theta star right? So we have the parameters of our prior and our conditional distributions and what we want to do is in order to have a generative model be able to generate new data we want to estimate these parameters of our true parameters Okay so let's first talk about how should we represent this model. All right, so if we're going to have a model for this generator process, well we've already said before that we can choose our prior P of Z to be something simple. Something like a Gaussian, right? And this is the reasonable thing to choose for for latent attributes. Now for our conditional distribution P of X given Z this is much more complex right, because we need to use this to generate an image and so for P of X given Z, well as we saw before, when we have some type of complex function that we want to represent we can represent this with a neural network. And so that's a natural choice for let's try and model P of X given Z with a neural network. And we're going to call this the decoder network. Right, so we're going to think about taking some latent representation and trying to decode this into the image that it's specifying. So now how can we train this model? Right, we want to be able to train this model so that we can learn an estimate of these parameters. So if we remember our strategy from training generative models, back from are fully visible belief networks, our pixelRNNs and CNNs, a straightforward natural strategy is to try and learn these model parameters in order to maximize the likelihood of the training data. 
Right, so we saw earlier that in this case, with our latent variable Z, we're going to have to write out P of X taking expectation over all possible values of Z which is continuous and so we get this expression here. Right so now we have it with this latent Z and now if we're going to, if you want to try and maximize its likelihood, well what's the problem? Can we just take this take gradients and maximize this likelihood? [student's words obscured due to lack of microphone] Right, so this integral is not going to be tractable, that's correct. So let's take a look at this in a little bit more detail. Right, so we have our data likelihood term here. And the first time is P of Z. And here we already said earlier, we can just choose this to be a simple Gaussian prior, so this is fine. P of X given Z, well we said we were going to specify a decoder neural network. So given any Z, we can get P of X given Z from here. It's the output of our neural network. But then what's the problem here? Okay this was supposed to be a different unhappy face but somehow I don't know what happened, in the process of translation, it turned into a crying black ghost but what this is symbolizing is that basically if we want to compute P of X given Z for every Z this is now intractable right, we cannot compute this integral. So data likelihood is intractable and it turns out that if we look at other terms in this model if we look at our posterior density, So P of our posterior of Z given X, then this is going to be P of X given Z times P of Z over P of X by Bayes' rule and this is also going to be intractable, right. We have P of X given Z is okay, P of Z is okay, but we have this P of X our likelihood which has the integral and it's intractable. So we can't directly optimizes this. but we'll see that a solution, a solution that will enable us to learn this model is if in addition to using a decoder network defining this neural network to model P of X given Z. If we now define an additional encoder network Q of Z given X we're going to call this an encoder because we want to turn our input X into, get the likelihood of Z given X, we're going to encode this into Z. And defined this network to approximate the P of Z given X. Right this was posterior density term now is also intractable. If we use this additional network to approximate this then we'll see that this will actually allow us to derive a lower bound on the data likelihood that is tractable and which we can optimize. Okay so first just to be a little bit more concrete about these encoder and decoder networks that I specified, in variational autoencoders we want the model probabilistic generation of data. So in autoencoders we already talked about this concept of having an encoder going from input X to some feature Z and a decoder network going from Z back out to some image X. And so here we go to again have an encoder network and a decoder network but we're going to make these probabilistic. So now our encoder network Q of Z given X with parameters phi are going to output a mean and a diagonal covariance and from here, this will be the direct outputs of our encoder network and the same thing for our decoder network which is going to start from Z and now it's going to output the mean and the diagonal covariance of some X, same dimension as the input given Z And then this decoder network has different parameters theta. And now in order to actually get our Z and our, This should be Z given X and X given Z. We'll sample from these distributions. 
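Written out in standard VAE notation (with model parameters theta), the two quantities just discussed are the marginal likelihood and the posterior, and both are intractable because of the integral over z:

```latex
p_\theta(x) = \int p_\theta(z)\, p_\theta(x \mid z)\, dz
\qquad\text{and}\qquad
p_\theta(z \mid x) = \frac{p_\theta(x \mid z)\, p_\theta(z)}{p_\theta(x)} .
```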
So now our encoder and our decoder networks are producing distributions over Z and X respectively, and we'll sample from these distributions in order to get a value from them. So you can see how this is taking us in the direction of being able to sample and generate new data. And just one thing to note is that for these encoder and decoder networks, you'll also hear different terms for them. The encoder network can also be called a recognition or inference network, because we're trying to do inference of this latent representation Z given X, and then the decoder network is what we'll use to perform generation, so you'll also hear generation network being used. Okay, so now equipped with our encoder and decoder networks, let's try and work out the data likelihood again, and we'll use the log of the data likelihood here. So we'll see that if we want the log of P of X, we can write this out as the log of P of X but take the expectation with respect to Z, where Z is sampled from the distribution Q of Z given X that we've now defined using the encoder network. And we can do this because P of X doesn't depend on Z, right, 'cause Z is not part of it. And we'll see that taking the expectation with respect to Z is going to come in handy later on. Okay, so from this original expression we can now expand it out to be log of P of X given Z times P of Z over P of Z given X, using Bayes' rule. And so this is just directly writing this out. And then taking this, we can also multiply it by a constant, so Q of Z given X over Q of Z given X. This is one, so we can do this; it doesn't change anything, but it's going to be helpful later on. So given that, what we'll do is write it out into these three separate terms. And you can work out this math later on by yourself, but it's essentially just using logarithm rules, taking all of the terms that we had in the line above and separating them out into three different terms that will have nice meanings. Right, so if we look at this, the first term that we get separated out is the expectation over Z of log of P of X given Z, and then we're going to have two KL terms, right. A KL divergence term says how close two distributions are, so for example how close the distribution Q of Z given X is to P of Z; it's exactly this expectation term above, and it's just a distance metric for distributions. And so we saw that these are nice KL terms that we can write out. And now if we look at these three terms that we have here, the first term involves P of X given Z, which is provided by our decoder network. And we're able to compute an estimate of this term through sampling, and we'll see that we can make this sampling differentiable through something called the re-parameterization trick, which is a detail that you can look at in the paper if you're interested. But basically we can now compute this term. And then for these KL terms, the second term is a KL between two Gaussians: our Q of Z given X, remember our encoder produced this distribution which had a mean and a diagonal covariance, so it's a nice Gaussian, and also our prior P of Z, which is also a Gaussian. And when you have a KL of two Gaussians, you have a nice closed-form solution. And then this third KL term now, this is a KL of Q of Z given X with P of Z given X. But we know that P of Z given X was this intractable posterior that we saw earlier, right?
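To keep the three terms straight, the decomposition being described is the standard variational bound (notation as in the lecture, encoder parameters phi and decoder parameters theta); dropping the last KL term, which is intractable but non-negative, leaves the tractable lower bound:

```latex
\log p_\theta(x)
 = \underbrace{\mathbb{E}_{z \sim q_\phi(z \mid x)}\!\big[\log p_\theta(x \mid z)\big]
   - D_{KL}\!\big(q_\phi(z \mid x)\,\|\,p_\theta(z)\big)}_{\text{tractable lower bound } \mathcal{L}(x;\,\theta,\phi)}
   \;+\;
   \underbrace{D_{KL}\!\big(q_\phi(z \mid x)\,\|\,p_\theta(z \mid x)\big)}_{\ge\, 0}
 \;\;\ge\;\; \mathcal{L}(x;\,\theta,\phi)
```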
That we didn't want to compute that's why we had this approximation using Q. And so this term is still is a problem. But one thing we do know about this term is that KL divergence, it's a distance between two distributions is always greater than or equal to zero by definition. And so what we can do with this is that, well what we have here, the two terms that we can work nicely with, this is a, this is a tractable lower bound which we can actually take gradient of and optimize. P of X given Z is differentiable and the KL terms are also, the close form solution is also differentiable. And this is a lower bound because we know that the KL term on the right, the ugly one is greater than or equal it zero. So we have a lower bound. And so what we'll do to train a variational autoencoder is that we take this lower bound and we instead optimize and maximize this lower bound instead. So we're optimizing a lower bound on the likelihood of our data. So that means that our data is always going to have a likelihood that's at least as high as this lower bound that we're maximizing. And so we want to find the parameters theta, estimate parameters theta and phi that allows us to maximize this. And then one last sort of intuition about this lower bound that we have is that this first term is expectation over all samples of Z sampled from passing our X through the encoder network sampling Z taking expectation over all of these samples of likelihood of X given Z and so this is a reconstruction, right? This is basically saying, if I want this to be big I want this likelihood P of X given Z to be high, so it's kind of like trying to do a good job reconstructing the data. So similar to what we had from our autoencoder before. But the second term here is saying make this KL small. Make our approximate posterior distribution close to our prior distribution. And this basically is saying that well we want our latent variable Z to be following this, have this distribution type, distribution shape that we would like it to have. Okay so any questions about this? I think this is a lot of math that if you guys are interested you should go back and kind of work through all of the derivations yourself. Yeah. [student's words obscured due to lack of microphone] So the question is why do we specify the prior and the latent variables as Gaussian? And the reason is that well we're defining some sort of generative process right, of sampling Z first and then sampling X first. And defining it as a Gaussian is a reasonable type of prior that we can say makes sense for these types of latent attributes to be distributed according to some sort of Gaussian, and then this lets us now then optimize our model. Okay, so we talked about how we can deride this lower bound and now let's put this all together and walk through the process of the training of the AE. Right so here's the bound that we want to optimize, to maximize. And now for a forward pass. We're going to proceed in the following manner. We have our input data X, so we'll a mini batch of input data. And then we'll pass it through our encoder network so we'll get Q of Z given X. And from this Q of Z given X, this'll be the terms that we use to compute the KL term. And then from here we'll sample Z from this distribution of Z given X so we have a sample of the latent factors that we can infer from X. And then from here we're going to pass a Z through another, our second decoder network. 
And from the decoder network we'll get this output for the mean and variance on our distribution for X given Z and then finally we can sample now our X given Z from this distribution and here this will produce some sample output. And when we're training we're going to take this distribution and say well our loss term is going to be log of our training image pixel values given Z. So our loss functions going to say let's maximize the likelihood of this original input being reconstructed. And so now for every mini batch of input we're going to compute this forward pass. Get all these terms that we need and then this is all differentiable so then we just backprop though all of this and then get our gradient, we update our model and we use this to continuously update our parameters, our generator and decoder network parameters theta and phi in order to maximize the likelihood of the trained data. Okay so once we've trained our VAE, so now to generate data, what we can do is we can use just the decoder network. All right, so from here we can sample Z now, instead of sampling Z from this posterior that we had during training, while during generation we sample from our true generative process. So we sample from our prior that we specify. And then we're going to then sample our data X from here. And we'll see that this can produce, in this case, train on MNIST, these are samples of digits generated from a VAE trained on MNIST. And you can see that, you know, we talked about this idea of Z representing these latent factors where we can bury Z right according to our sample from different parts of our prior and then get different kind of interpretable meanings from here. So here we can see that this is the data manifold for two dimensional Z. So if we have a two dimensional Z and we take Z and let's say some range from you know, from different percentiles of the distribution, and we vary Z1 and we vary Z2, then you can see how the image generated from every combination of Z1 and Z2 that we have here, you can see it's transitioning smoothly across all of these different variations. And you know our prior on Z was, it was diagonal, so we chose this in order to encourage this to be independent latent variables that can then encode interpretable factors of variation. So because of this now we'll have different dimensions of Z, encoding different interpretable factors of variation. So, in this example train now on Faces, we'll see as we vary Z1, going up and down, you'll see the amount of smile changing. So from a frown at the top to like a big smile at the bottom and then as we go vary Z2, from left to right, you can see the head pose changing. From one direction all the way to the other. And so one additional thing I want to point out is that as a result of doing this, these Z variables are also good feature representations. Because they encode how much of these different these different interpretable semantics that we have. And so we can use our Q of Z given X, the encoder that we've learned and give it an input images X, we can map this to Z and use the Z as features that we can use for downstream tasks like supervision, or like classification or other tasks. Okay so just another couple of examples of data generated from VAEs. So on the left here we have data generated on CIFAR-10, trained on CIFAR-10, and then on the right we have data trained and generated on Faces. And we'll see so we can see that in general VAEs are able to generate recognizable data. 
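Putting the pieces together, here is a minimal fully connected VAE sketch in PyTorch. The layer sizes, the binary-cross-entropy reconstruction term, and the single-sample estimate of the expectation are illustrative choices under the assumptions stated in the comments, not the exact setup from the slides.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal VAE sketch (fully connected for brevity; sizes are illustrative).
class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, 400)
        self.mu = nn.Linear(400, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(400, z_dim)   # log of the diagonal covariance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, 400), nn.ReLU(),
                                 nn.Linear(400, x_dim), nn.Sigmoid())

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

vae = VAE()
x = torch.rand(16, 784)                     # fake mini-batch of images in [0, 1]
x_hat, mu, logvar = vae(x)
recon = F.binary_cross_entropy(x_hat, x, reduction='sum')      # "reconstruct the input" term
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())   # closed-form KL(q(z|x) || N(0, I))
loss = recon + kl                           # negative of the variational lower bound
loss.backward()

# Generation after training: sample z from the prior and just run the decoder.
with torch.no_grad():
    samples = vae.dec(torch.randn(16, 20))
```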
One of the main drawbacks of VAEs is that they tend to still have a bit of a blurry aspect to them. You can see this in the faces and so this is still an active area of research. Okay so to summarize VAEs, they're a probabilistic spin on traditional autoencoders. So instead of deterministically taking your input X and going to Z, feature Z and then back to reconstructing X, now we have this idea of distributions and sampling involved which allows us to generate data. And in order to train this, VAEs are defining an intractable density. So we can derive and optimize a lower bound, a variational lower bound, so variational means basically using approximations to handle these types of intractable expressions. And so this is why this is called a variational autoencoder. And so some of the advantages of this approach is that VAEs are, they're a principled approach to generative models and they also allow this inference query so being able to infer things like Q of Z given X. That we said could be useful feature representations for other tasks. So disadvantages of VAEs are that while we're maximizing the lower bound of the likelihood, which is okay like you know in general this is still pushing us in the right direction and there's more other theoretical analysis of this. So you know, it's doing okay, but it's maybe not still as direct an optimization and evaluation as the pixel RNNs and CNNs that we saw earlier, but which had, and then, also the VAE samples are tending to be a little bit blurrier and of lower quality compared to state of the art samples that we can see from other generative models such as GANs that we'll talk about next. And so VAEs now are still, they're still an active area of research. People are working on more flexible approximations, so richer approximate posteriors, so instead of just a diagonal Gaussian some richer functions for this. And then also, another area that people have been working on is incorporating more structure in these latent variables. So now we had all of these independent latent variables but people are working on having modeling structure in here, groupings, other types of structure. Okay, so yeah, question. [student's words obscured due to lack of microphone] Yeah, so the question is we're deciding the dimensionality of the latent variable. Yeah, that's something that you specify. Okay, so we've talked so far about pixelCNNs and VAEs and now we'll take a look at a third and very popular type of generative model called GANs. So the models that we've seen so far, pixelCNNs and RNNs define a tractable density function. And they optimize the likelihood of the trained data. And then VAEs in contrast to that now have this additional latent variable Z that they define in the generative process. And so having the Z has a lot of nice properties that we talked about, but they are also cause us to have this intractable density function that we can't optimize directly and so we derive and optimize a lower bound on the likelihood instead. And so now what if we just give up on explicitly modeling this density at all? And we say well what we want is just the ability to sample and to have nice samples from our distribution. So this is the approach that GANs take. So in GANs we don't work with an explicit density function, but instead we're going to take a game-theoretic approach and we're going to learn to generate from our training distribution through a set up of a two player game, and we'll talk about this in more detail. 
So, in the GAN setup we're saying, okay, what we want, what we care about, is to be able to sample from a complex, high-dimensional training distribution. If we think about producing samples from this distribution, there's no direct way that we can do this. We have this very complex distribution; we can't just take samples from it. So the solution we're going to take is that we can, however, sample from simpler distributions, for example random noise, right? Gaussians, these we can sample from. And so what we're going to do is learn a transformation from these simple distributions directly to the training distribution that we want. So the question is, what can we use to represent this complex transformation? Neural network, I heard the answer. So when we want to model some kind of complex function or transformation, we use a neural network. Okay, so in the GAN setup we're going to take some input, which is a vector of random noise of some dimension that we specify, and then we're going to pass this through a generator network, and we're going to get as output, directly, a sample from the training distribution. So we want every input of random noise to correspond to a sample from the training distribution. And the way we're going to train and learn this network is that we're going to look at this as a two-player game. So we have two players: a generator network as well as an additional discriminator network that I'll show next. Our generator network, as player one, is going to try to fool the discriminator by generating real-looking images. And then our second player, our discriminator network, is going to try to distinguish between real and fake images. So it wants to do as good a job as possible of determining which of these images are counterfeit or fake images generated by the generator. Okay, so what this looks like is, we have our random noise going into our generator network, and the generator network is generating these images that we're going to call fake. And then we're also going to have real images that we take from our training set, and we want the discriminator to distinguish between real and fake images, outputting real or fake for each image. So the idea is, if we're able to train a very good discriminator, one that can do a good job of discriminating real versus fake, and our generator network is then able to generate fake images that successfully fool this discriminator, then we have a good generative model. We're generating images that look like images from the training set. Okay, so we have these two players and we're going to train this jointly in a minimax game formulation. So this minimax objective function is what we have here. It's going to be a minimum over theta G, the parameters of our generator network G, and a maximum over theta D, the parameters of our discriminator network D, of this objective, these terms. And if we look at these terms, what this is saying is, well, this first thing is the expectation over the data of log of D of X. This log of D of X is the discriminator output for real data X; it's the likelihood of real data being classified as real, for data drawn from the data distribution P data.
And then the second term here is an expectation over Z drawn from P of Z; Z drawn from P of Z means samples from our generator network, and this term D of G of Z that we have here is the output of our discriminator for the generated fake data G of Z. And so if we think about what this is trying to do, our discriminator wants to maximize this objective, right, it's a max over theta D such that D of X is close to one: it's close to real, it's high for the real data. And then D of G of Z, what it thinks of the fake data, should be small; we want this to be close to zero. So if we're able to maximize this, it means the discriminator is doing a good job of distinguishing, basically classifying, between real and fake data. And then for our generator, we want the generator to minimize this objective such that D of G of Z is close to one. Because if D of G of Z is close to one over here, then the one-minus term is small, and if we minimize that term, then we're having the discriminator think that our fake data is actually real. So that means that our generator is producing realistic samples. Okay, so this is the important minimax objective of GANs to try and understand, so are there any questions about this? [student's words obscured due to lack of microphone] I'm not sure I understand your question, can you, [student's words obscured due to lack of microphone] Yeah, so the question is whether this is basically trying to have the first network produce real-looking images that our second network, the discriminator, cannot distinguish from real ones, and yes, that's the idea. Okay, so the next question is how do we actually label the data or do the training for these networks. We'll see how to train the networks next. But in terms of what the data labels are, this is unsupervised, so there's no data labeling. But data generated from the generator network, the fake images, have a label of basically zero, or fake. And we can take training images, which are real images, and these basically have a label of one, or real. So the loss function for our discriminator uses this: it's trying to output a zero for the generator images and a one for the real images. So there are no external labels. [student's words obscured due to lack of microphone] So the question is whether the label for the generator network will be the output of the discriminator network. The generator is not really doing classification necessarily. Its objective is here: D of G of Z, it wants this to be high. So given a fixed discriminator, it wants to learn the generator parameters such that this is high. So we'll take the fixed discriminator's output and use that to do the backprop. Okay, so in order to train this, what we're going to do is alternate between gradient ascent on our discriminator, where we're trying to learn theta D to maximize this objective, and gradient descent on the generator, so taking gradient descent on the parameters theta G such that we're minimizing this objective. And here we are only taking the right part over here, because that's the only part that depends on the theta G parameters. Okay, so this is how we can train this GAN. We can alternate between training our discriminator and our generator in this game, each trying to beat the other, with the generator trying to fool the discriminator.
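For reference, the minimax objective being described can be written out as follows (this is the standard GAN objective, with the lecture's notation of generator parameters theta_g and discriminator parameters theta_d):

```latex
\min_{\theta_g}\,\max_{\theta_d}\;
\Big[
\mathbb{E}_{x \sim p_{\text{data}}}\,\log D_{\theta_d}(x)
\;+\;
\mathbb{E}_{z \sim p(z)}\,\log\!\big(1 - D_{\theta_d}(G_{\theta_g}(z))\big)
\Big]
```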
But one thing that is important to note is that in practice this generator objective, as we've just defined it, actually doesn't work that well. And the reason for this is that we have to look at the loss landscape. So if we look at the loss landscape over here for one minus D of G of Z, which is what we want to minimize for the generator, it has this shape here. We want to minimize this, and it turns out the slope of this loss is actually higher towards the right, high when D of G of Z is closer to one. So that means that when our generator is already doing a good job of fooling the discriminator, we're going to have a high gradient. And on the other hand, when we have bad samples, when our generator hasn't learned to generate well yet and the discriminator can easily tell, we're closer to the zero region on the X axis, and there the gradient is relatively flat. And so what this actually means is that our gradient signal is dominated by the region where the samples are already pretty good, whereas we actually want to learn a lot when the samples are bad, right? Those are the samples that we want to learn from. So this basically makes it hard to learn, and in order to improve learning, what we're going to do is define a slightly different objective function for the generator, where now we're going to do gradient ascent instead. So instead of minimizing the likelihood of our discriminator being correct, which is what we had earlier, we'll kind of flip it and say let's maximize the likelihood of our discriminator being wrong. And this produces this objective here of maximizing log of D of G of Z. And so, now basically, there should be a negative sign here, but basically we want to maximize this flipped objective instead, and what this does is, if we plot this function on the right here, then we have a high gradient signal in the region on the left where we have bad samples, and now the flatter region is to the right where we would have good samples. So now we're going to learn more from the regions of bad samples. And so this has the same goal of fooling the discriminator, but it actually works much better in practice, and a lot of work on GANs that uses this kind of vanilla GAN formulation is actually using this objective. Okay, so just an aside on that is that jointly training these two networks is challenging and can be unstable. As we saw here, we're alternating between training a discriminator and training a generator. With this type of alternation, it's basically hard to learn two networks at once, and there's also this issue that what our loss landscape looks like can affect our training dynamics. So an active area of research still is how we can choose objectives with better loss landscapes that can help training and make it more stable. Okay, so now let's put this all together and look at the full GAN training algorithm. What we're going to do is, for each iteration of training, we're going to first train the discriminator network a bit and then train the generator network. So for k steps of training the discriminator network, we'll sample a mini-batch of noise samples from our noise prior over Z and then also sample a mini-batch of real samples from our training data X. Then we'll pass the noise through our generator and get our fake images out.
So we have a mini batch of fake images and mini batch of real images. And then we'll pick a gradient step on the discriminator using this mini batch, our fake and our real images and then update our discriminator parameters. And use this and do this a certain number of iterations to train the discriminator for a bit basically. And then after that we'll go to our second step which is training the generator. And so here we'll sample just a mini batch of noise samples. We'll pass this through our generator and then now we want to do backpop on this to basically optimize our generator objective that we saw earlier. So we want to have our generator fool our discriminator as much as possible. And so we're going to alternate between these two steps of taking gradient steps for our discriminator and for the generator. And I said for k steps up here, for training the discriminator and so this is kind of a topic of debate. Some people think just having one iteration of discriminator one type of discriminator, one type of generator is best. Some people think it's better to train the discriminator for a little bit longer before switching to the generator. There's no real clear rule and it's something that people have found different things to work better depending on the problem. And one thing I want to point out is that there's been a lot of recent work that alleviates this problem and makes it so you don't have to spend so much effort trying to balance how the training of these two networks. It'll have more stable training and give better results. And so Wasserstein GAN is an example of a paper that was an important work towards doing this. Okay so looking at the whole picture we've now trained, we have our network setup, we've trained both our generator network and our discriminator network and now after training for generation, we can just take our generator network and use this to generate new images. So we just take noise Z and pass this through and generate fake images from here. Okay and so now let's look at some generated samples from these GANs. So here's an example of trained on MNIST and then on the right on Faces. And for each of these you can also see, just for visualization the closest, on the right, the nearest neighbor from the training set to the column right next to it. And so you can see that we're able to generate very realistic samples and it never directly memorizes the training set. And here are some examples from the original GAN paper on CIFAR images. And these are still fairly, not such good quality yet, these were, the original work is from 2014, so these are some older, simpler networks. And these were using simple, fully connected networks. And so since that time there's been a lot of work on improving GANs. One example of a work that really took a big step towards improving the quality of samples is this work from Alex Radford in ICLR 2016 on adding convolutional architectures to GANs. In this paper there was a whole set of guidelines on architectures for helping GANs to produce better samples. So you can look at this for more details. This is an example of a convolutional architecture that they're using which is going from our input Z noise vector Z and transforming this all the way to the output sample. So now from this large convolutional architecture we'll see that the samples from this model are really starting to look very good. 
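Before moving on to more samples, here is a minimal sketch in PyTorch of one iteration of the alternating training procedure just described. The tiny fully connected G and D, the optimizer settings, and the use of binary cross-entropy are placeholder choices, not the DCGAN architecture from the slides; the generator step uses the "flipped" objective (maximize log D(G(z))) discussed a moment ago.

```python
import torch
import torch.nn as nn

# Tiny placeholder networks -- not the convolutional architecture from the slides.
z_dim, x_dim = 64, 784
G = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(x_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1), nn.Sigmoid())
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(x_real, k=1):
    n = x_real.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) k gradient steps on the discriminator: push D(real) -> 1 and D(G(z)) -> 0.
    for _ in range(k):
        x_fake = G(torch.randn(n, z_dim)).detach()   # don't backprop into G here
        d_loss = bce(D(x_real), ones) + bce(D(x_fake), zeros)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) One step on the generator with the "flipped" objective: label the fakes as
    #    real so the gradient is strong precisely when the samples are still bad.
    x_fake = G(torch.randn(n, z_dim))
    g_loss = bce(D(x_fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

train_step(torch.rand(32, x_dim) * 2 - 1)   # stand-in "real" mini-batch scaled to [-1, 1]
```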
So this is trained on a dataset of bedrooms and we can see all kinds of very realistic fancy looking bedrooms with windows and night stands and other furniture around there so these are some really pretty samples. And we can also try and interpret a little bit of what these GANs are doing. So in this example here what we can do is we can take two points of Z, two different random noise vectors and let's just interpolate between these points. And each row across here is an interpolation from one random noise Z to another random noise vector Z and you can see that as it's changing, it's smoothly interpolating the image as well all the way over. And so something else that we can do is we can see that, well, let's try to analyze further what these vectors Z mean, and so we can try and do vector math on here. So what this experiment does is it says okay, let's take some images of smiling, samples of smiling women images and then let's take some samples of neutral women and then also some samples of neutral men. And so let's try and do take the average of the Z vectors that produced each of these samples and if we, Say we take this, mean vector for the smiling women, subtract the mean vector for the neutral women and add the mean vector for the neutral man, what do we get? And we get samples of smiling man. So we can take the Z vector produced there, generate samples and get samples of smiling men. And we can have another example of this. Of glasses man minus no glasses man and plus glasses women. And get women with glasses. So here you can see that basically the Z has this type of interpretability that you can use this to generate some pretty cool examples. Okay so this year, 2017 has really been the year of the GAN. There's been tons and tons of work on GANs and it's really sort of exploded and gotten some really cool results. So on the left here you can see people working on better training and generation. So we talked about improving the loss functions, more stable training and this was able to get really nice generations here of different types of architectures on the bottom here really crisp high resolution faces. With GANs you can also do, there's also been models on source to try to domain transfer and conditional GANs. And so here, this is an example of source to try to get domain transfer where, for example in the upper part here we are trying to go from source domain of horses to an output domain of zebras. So we can take an image of horses and train a GAN such that the output is going to be the same thing but now zebras in the same image setting as the horses and go the other way around. We can transform apples into oranges. And also the other way around. We can also use this to do photo enhancement. So producing these, really taking a standard photo and trying to make really nice, as if you had, pretending that you have a really nice expensive camera. That you can get the nice blur effects. On the bottom here we have scene changing, so transforming an image of Yosemite from the image in winter time to the image in summer time. And there's really tons of applications. So on the right here there's more. There's also going from a text description and having a GAN that's now conditioned on this text description and producing an image. So there's something here about a small bird with a pink breast and crown and now we're going to generate images of this. And there's also examples down here of filling in edges. 
So given conditions on some sketch that we have, can we fill in a color version of what this would look like. Can we take a Google, a map grid and put something that looks like Google Earth on, and turn it into something that looks like Google Earth. Go in and hallucinate all of these buildings and trees and so on. And so there's lots of really cool examples of this. And there's also this website for pics to pics which did a lot of these kind of conditional GAN type examples. I encourage you to go look at for more interesting applications that people have done with GANs. And in terms of research papers there's also there's a huge number of papers about GANs this year now. There's a website called the GAN Zoo that kind of is trying to compile a whole list of these. And so here this has only taken me from A through C on the left here and through like L on the right. So it won't even fit on the slide. There's tons of papers as well that you can look at if you're interested. And then one last pointer is also for tips and tricks for training GANs, here's a nice little website that has pointers if you're trying to train these GANs in practice. Okay, so summary of GANs. GANs don't work with an explicit density function. Instead we're going to represent this implicitly through samples and they take a game-theoretic approach to training so we're going to learn to generate from our training distribution through a two player game setup. And the pros of GANs are that they're really having gorgeous state of the art samples and you can do a lot with these. The cons are that they are trickier and more unstable to train, we're not just directly optimizing a one objective function that we can just do backpop and train easily. Instead we have these two networks that we're trying to balance training with so it can be a bit more unstable. And we also can lose out on not being able to do some of the inference queries, P of X, P of Z given X that we had for example in our VAE. And GANs are still an active area of research, this is a relatively new type of model that we're starting to see a lot of and you'll be seeing a lot more of. And so people are still working now on better loss functions more stable training, so Wasserstein GAN for those of you who are interested is basically an improvement in this direction. That now a lot of people are also using and basing models off of. There's also other works like LSGAN, Least Square's GAN, Least Square's GAN and others. So you can look into this more. And a lot of times for these new models in terms of actually implementing this, they're not necessarily big changes. They're different loss functions that you can change a little bit and get like a big improvement in training. And so this is, some of these are worth looking into and you'll also get some practice on your homework assignment. And there's also a lot of work on different types of conditional GANs and GANs for all kinds of different problem setups and applications. Okay so a recap of today. We talked about generative models. We talked about three of the most common kinds of generative models that people are using and doing research on today. So we talked first about pixelRNN and pixelCNN, which is an explicit density model. It optimizes the exact likelihood and it produces good samples but it's pretty inefficient because of the sequential generation. We looked at VAE which optimizes a variational or lower bound on the likelihood and this also produces useful a latent representation. 
You can do inference queries. But the example quality is still not the best. So even though it has a lot of promise, it's still a very active area of research and has a lot of open problems. And then GANs we talked about is a game-theoretic approach for training and it's what currently achieves the best state of the art examples. But it can also be tricky and unstable to train and it loses out a bit on the inference queries. And so what you'll also see is a lot of recent work on combinations of these kinds of models. So for example adversarial autoencoders. Something like a VAE trained with an additional adversarial loss on top which improves the sample quality. There's also things like pixelVAE is now a combination of pixelCNN and VAE so there's a lot of combinations basically trying to take the best of all these worlds and put them together. Okay so today we talked about generative models and next time we'll talk about reinforcement learning. Thanks.
Lecture_Collection_Convolutional_Neural_Networks_for_Visual_Recognition_Spring_2017
Lecture_1_Introduction_to_Convolutional_Neural_Networks_for_Visual_Recognition.txt
- So welcome everyone to CS231n. I'm super excited to offer this class again for the third time. It seems that every time we offer this class it's growing exponentially unlike most things in the world. This is the third time we're teaching this class. The first time we had 150 students. Last year, we had 350 students, so it doubled. This year we've doubled again to about 730 students when I checked this morning. So anyone who was not able to fit into the lecture hall I apologize. But, the videos will be up on the SCPD website within about two hours. So if you weren't able to come today, then you can still check it out within a couple hours. So this class CS231n is really about computer vision. And, what is computer vision? Computer vision is really the study of visual data. Since there's so many people enrolled in this class, I think I probably don't need to convince you that this is an important problem, but I'm still going to try to do that anyway. The amount of visual data in our world has really exploded to a ridiculous degree in the last couple of years. And, this is largely a result of the large number of sensors in the world. Probably most of us in this room are carrying around smartphones, and each smartphone has one, two, or maybe even three cameras on it. So I think on average there's even more cameras in the world than there are people. And, as a result of all of these sensors, there's just a crazy large, massive amount of visual data being produced out there in the world each day. So one statistic that I really like to kind of put this in perspective is a 2015 study from CISCO that estimated that by 2017 which is where we are now that roughly 80% of all traffic on the internet would be video. This is not even counting all the images and other types of visual data on the web. But, just from a pure number of bits perspective, the majority of bits flying around the internet are actually visual data. So it's really critical that we develop algorithms that can utilize and understand this data. However, there's a problem with visual data, and that's that it's really hard to understand. Sometimes we call visual data the dark matter of the internet in analogy with dark matter in physics. So for those of you who have heard of this in physics before, dark matter accounts for some astonishingly large fraction of the mass in the universe, and we know about it due to the existence of gravitational pulls on various celestial bodies and what not, but we can't directly observe it. And, visual data on the internet is much the same where it comprises the majority of bits flying around the internet, but it's very difficult for algorithms to actually go in and understand and see what exactly is comprising all the visual data on the web. Another statistic that I like is that of Youtube. So roughly every second of clock time that happens in the world, there's something like five hours of video being uploaded to Youtube. So if we just sit here and count, one, two, three, now there's 15 more hours of video on Youtube. Google has a lot of employees, but there's no way that they could ever have an employee sit down and watch and understand and annotate every video. So if they want to catalog and serve you relevant videos and maybe monetize by putting ads on those videos, it's really crucial that we develop technologies that can dive in and automatically understand the content of visual data. 
So this field of computer vision is truly an interdisciplinary field, and it touches on many different areas of science and engineering and technology. So obviously, computer vision's the center of the universe, but sort of as a constellation of fields around computer vision, we touch on areas like physics because we need to understand optics and image formation and how images are actually physically formed. We need to understand biology and psychology to understand how animal brains physically see and process visual information. We of course draw a lot on computer science, mathematics, and engineering as we actually strive to build computer systems that implement our computer vision algorithms. So a little bit more about where I'm coming from and about where the teaching staff of this course is coming from. Me and my co-instructor Serena are both PHD students in the Stanford Vision Lab which is headed by professor Fei-Fei Li, and our lab really focuses on machine learning and the computer science side of things. I work a little bit more on language and vision. I've done some projects in that. And, other folks in our group have worked a little bit on the neuroscience and cognitive science side of things. So as a bit of introduction, you might be curious about how this course relates to other courses at Stanford. So we kind of assume a basic introductory understanding of computer vision. So if you're kind of an undergrad, and you've never seen computer vision before, maybe you should've taken CS131 which was offered earlier this year by Fei-Fei and Juan Carlos Niebles. There was a course taught last quarter by Professor Chris Manning and Richard Socher about the intersection of deep learning and natural language processing. And, I imagine a number of you may have taken that course last quarter. There'll be some overlap between this course and that, but we're really focusing on the computer vision side of thing, and really focusing all of our motivation in computer vision. Also concurrently taught this quarter is CS231a taught by Professor Silvio Savarese. And, CS231a really focuses is a more all encompassing computer vision course. It's focusing on things like 3D reconstruction, on matching and robotic vision, and it's a bit more all encompassing with regards to vision than our course. And, this course, CS231n, really focuses on a particular class of algorithms revolving around neural networks and especially convolutional neural networks and their applications to various visual recognition tasks. Of course, there's also a number of seminar courses that are taught, and you'll have to check the syllabus and course schedule for more details on those 'cause they vary a bit each year. So this lecture is normally given by Professor Fei-Fei Li. Unfortunately, she wasn't able to be here today, so instead for the majority of the lecture we're going to tag team a little bit. She actually recorded a bit of pre-recorded audio describing to you the history of computer vision because this class is a computer vision course, and it's very critical and important that you understand the history and the context of all the existing work that led us to these developments of convolutional neural networks as we know them today. I'll let virtual Fei-Fei take over [laughing] and give you a brief introduction to the history of computer vision. Okay let's start with today's agenda. 
So we have two topics to cover one is a brief history of computer vision and the other one is the overview of our course CS 231 so we'll start with a very brief history of where vision comes from when did computer vision start and where we are today. The history the history of vision can go back many many years ago in fact about 543 million years ago. What was life like during that time? Well the earth was mostly water there were a few species of animals floating around in the ocean and life was very chill. Animals didn't move around much there they don't have eyes or anything when food swims by they grab them if the food didn't swim by they just float around but something really remarkable happened around 540 million years ago. From fossil studies zoologists found out within a very short period of time — ten million years — the number of animal species just exploded. It went from a few of them to hundreds of thousands and that was strange — what caused this? There were many theories but for many years it was a mystery evolutionary biologists call this evolution's Big Bang. A few years ago an Australian zoologist called Andrew Parker proposed one of the most convincing theory from the studies of fossils he discovered around 540 million years ago the first animals developed eyes and the onset of vision started this explosive speciation phase. Animals can suddenly see; once you can see life becomes much more proactive. Some predators went after prey and prey have to escape from predators so the evolution or onset of vision started a evolutionary arms race and animals had to evolve quickly in order to survive as a species so that was the beginning of vision in animals after 540 million years vision has developed into the biggest sensory system of almost all animals especially intelligent animals in humans we have almost 50% of the neurons in our cortex involved in visual processing it is the biggest sensory system that enables us to survive, work, move around, manipulate things, communicate, entertain, and many things. The vision is really important for animals and especially intelligent animals. So that was a quick story of biological vision. What about humans, the history of humans making mechanical vision or cameras? Well one of the early cameras that we know today is from the 1600s, the Renaissance period of time, camera obscura and this is a camera based on pinhole camera theories. It's very similar to, it's very similar to the to the early eyes that animals developed with a hole that collects lights and then a plane in the back of the camera that collects the information and project the imagery. So as cameras evolved, today we have cameras everywhere this is one of the most popular sensors people use from smartphones to to other sensors. In the mean time biologists started studying the mechanism of vision. One of the most influential work in both human vision where animal vision as well as that inspired computer vision is the work done by Hubel and Wiesel in the 50s and 60s using electrophysiology. What they were asking, the question is "what was the visual processing mechanism like in primates, in mammals" so they chose to study cat brain which is more or less similar to human brain from a visual processing point of view. 
What they did is to stick some electrodes in the back of the cat brain which is where the primary visual cortex area is and then look at what stimuli makes the neurons in the in the back in the primary visual cortex of cat brain respond excitedly what they learned is that there are many types of cells in the, in the primary visual cortex part of the the cat brain but one of the most important cell is the simple cells they respond to oriented edges when they move in certain directions. Of course there are also more complex cells but by and large what they discovered is visual processing starts with simple structure of the visual world, oriented edges and as information moves along the visual processing pathway the brain builds up the complexity of the visual information until it can recognize the complex visual world. So the history of computer vision also starts around early 60s. Block World is a set of work published by Larry Roberts which is widely known as one of the first, probably the first PhD thesis of computer vision where the visual world was simplified into simple geometric shapes and the goal is to be able to recognize them and reconstruct what these shapes are. In 1966 there was a now famous MIT summer project called "The Summer Vision Project." The goal of this Summer Vision Project, I read: "is an attempt to use our summer workers effectively in a construction of a significant part of a visual system." So the goal is in one summer we're gonna work out the bulk of the visual system. That was an ambitious goal. Fifty years have passed; the field of computer vision has blossomed from one summer project into a field of thousands of researchers worldwide still working on some of the most fundamental problems of vision. We still have not yet solved vision but it has grown into one of the most important and fastest growing areas of artificial intelligence. Another person that we should pay tribute to is David Marr. David Marr was a MIT vision scientist and he has written an influential book in the late 70s about what he thinks vision is and how we should go about computer vision and developing algorithms that can enable computers to recognize the visual world. The thought process in his, in David Mars book is that in order to take an image and arrive at a final holistic full 3d representation of the visual world we have to go through several process. The first process is what he calls "primal sketch;" this is where mostly the edges, the bars, the ends, the virtual lines, the curves, the boundaries, are represented and this is very much inspired by what neuroscientists have seen: Hubel and Wiesel told us the early stage of visual processing has a lot to do with simple structures like edges. Then the next step after the edges and the curves is what David Marr calls "two-and-a-half d sketch;" this is where we start to piece together the surfaces, the depth information, the layers, or the discontinuities of the visual scene, and then eventually we put everything together and have a 3d model hierarchically organized in terms of surface and volumetric primitives and so on. So that was a very idealized thought process of what vision is and this way of thinking actually has dominated computer vision for several decades and is also a very intuitive way for students to enter the field of vision and think about how we can deconstruct the visual information. 
Another very important, seminal group of work happened in the 70s, when people began to ask the question "how can we move beyond the simple block world and start recognizing or representing real-world objects?" Think about the 70s: there was very little data available, computers were extremely slow, and PCs were not even around, but computer scientists were starting to think about how we can recognize and represent objects. In Palo Alto, both at Stanford and at SRI, two groups of scientists proposed similar ideas: one is called the "generalized cylinder," the other the "pictorial structure." The basic idea is that every object is composed of simple geometric primitives; for example, a person can be pieced together from generalized cylindrical shapes, or a person can be pieced together from critical parts and the elastic distances between those parts. Either representation is a way to reduce the complex structure of an object to a collection of simpler shapes and their geometric configuration. These works were influential for quite a few years, and then in the 80s came another example of thinking about how to reconstruct or recognize the visual world from simple structures: this work by David Lowe, in which he tries to recognize razors by constructing lines and edges, mostly straight lines, and their combinations. So there was a lot of effort in the 60s, 70s, and 80s to figure out what the tasks of computer vision are, and frankly it was very hard to solve the problem of object recognition; everything I've shown you so far was a very audacious, ambitious attempt, but these remained at the level of toy examples or just a few examples. Not a lot of progress had been made in terms of delivering something that could work in the real world. As people thought about the problems involved in solving vision, one important question came up: if object recognition is too hard, maybe we should first do object segmentation, that is, the task of taking an image and grouping the pixels into meaningful areas. We might not know that the pixels grouped together are called a person, but we can extract all the pixels that belong to the person from the background; that is called image segmentation. Here's one very early, seminal work by Jitendra Malik and his student Jianbo Shi from Berkeley, using a graph-theory algorithm for the problem of image segmentation. Here's another problem that made headway ahead of many other problems in computer vision, which is face detection. Faces are among the most important objects to humans, probably the most important. Around 1999 to 2000, machine learning techniques, especially statistical machine learning techniques, started to gain momentum. These are techniques such as support vector machines, boosting, and graphical models, including the first wave of neural networks. One particular work that made a big contribution used the AdaBoost algorithm to do real-time face detection; this is the work by Paul Viola and Michael Jones, and there's a lot to admire in it. It was done in 2001, when computer chips were still very, very slow, yet they were able to do face detection in images in near-real time. Within five years of the publication of this paper, by 2006, Fujifilm rolled out the first digital camera with a real-time face detector built in, so it was a very rapid transfer from basic science research to real-world application.
As a field we continued to explore how we can do object recognition better, and one very influential way of thinking from the late 90s through the first decade of the 2000s is feature-based object recognition. Here is a seminal work by David Lowe called the SIFT feature. The idea is that matching an entire object, for example this stop sign, to another stop sign is very difficult, because there might be all kinds of changes due to camera angle, occlusion, viewpoint, lighting, and just the intrinsic variation of the object itself. But the inspiring observation is that there are some parts of the object, some features, that tend to remain diagnostic and invariant to these changes. So the task of object recognition began with identifying these critical features on the object and then matching the features to a similar object, which is an easier task than pattern-matching the entire object. Here is a figure from his paper showing that a handful, several dozen, SIFT features from one stop sign are identified and matched to the SIFT features of another stop sign. Using the same building block, diagnostic features in images, the field made another step forward and started recognizing holistic scenes. Here is an example algorithm called Spatial Pyramid Matching; the idea is that there are features in images that can give us clues about which type of scene it is, whether it's a landscape or a kitchen or a highway and so on, and this particular work takes features from different parts of the image and at different resolutions, puts them together into a feature descriptor, and then runs a support vector machine algorithm on top of that. Very similar work gained momentum in human recognition: putting these features together well, we have a number of works that look at how we can compose human bodies in more realistic images and recognize them. One work is called the "histogram of oriented gradients," another work is called "deformable part models." As you can see, as we moved from the 60s, 70s, and 80s toward the first decade of the 21st century, one thing was changing, and that's the quality of the pictures: with the growth of the Internet and of digital cameras, we had better and better data to study computer vision with. One outcome in the early 2000s was that the field of computer vision defined a very important building-block problem to solve. It's not the only problem to solve, but in terms of recognition it is a very important one: object recognition. I've talked about object recognition all along, but in the early 2000s we began to have benchmark datasets that enabled us to measure progress on object recognition. One of the most influential benchmark datasets is called the PASCAL Visual Object Challenge, and it's a dataset composed of 20 object classes (three of them are shown here: train, airplane, person; I think it also has cows, bottles, cats, and so on). The dataset contains several thousand to ten thousand images per category, and different groups in the field developed algorithms to test against the test set and see how much progress we had made. Here is a figure that shows, from 2007 to 2012, that the performance on detecting the 20 objects in this benchmark dataset steadily increased. So there was a lot of progress made.
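Going back to the SIFT idea for a moment: as a rough, modern illustration (not from the lecture), here is a minimal sketch of SIFT keypoint matching with the ratio test, assuming OpenCV's SIFT implementation and two hypothetical image files; it is an illustration of the feature-matching idea rather than a reproduction of Lowe's original pipeline.

```python
# Minimal SIFT matching sketch with Lowe's ratio test.
# Assumes opencv-python with SIFT available (cv2.SIFT_create) and two
# hypothetical grayscale images "sign1.jpg" / "sign2.jpg".
import cv2

img1 = cv2.imread("sign1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("sign2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints + 128-d descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Match each descriptor in image 1 to its two nearest neighbors in image 2,
# then keep only matches that pass the ratio test (distinctive features).
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative feature correspondences")
```

The ratio test keeps only matches whose best descriptor distance is clearly smaller than the second best, which is roughly how the diagnostic, invariant features end up being selected.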
Around that time, a group of us, from Princeton to Stanford, also began to ask a harder question of ourselves as well as of our field: are we ready to recognize every object, or most of the objects, in the world? This was also motivated by an observation rooted in machine learning, which is that most machine learning algorithms (it doesn't matter whether it's a graphical model, a support vector machine, or AdaBoost) are very likely to overfit during training. Part of the problem is that visual data is very complex; because it's complex, our models tend to have a high-dimensional input and need a lot of parameters to fit, and when we don't have enough training data, overfitting happens very fast and we cannot generalize very well. Motivated by these two reasons, one being simply wanting to recognize the world of all objects and the other being overcoming the machine learning bottleneck of overfitting, we began this project called ImageNet. We wanted to put together the largest possible dataset of all the pictures we could find, the world of objects, and use that for training as well as for benchmarking. It was a project that took us about three years and lots of hard work; it basically began with downloading billions of images from the internet, organized by a dictionary called WordNet, which has tens of thousands of object classes, and then we had to use some clever crowd-engineering methods on the Amazon Mechanical Turk platform to sort, clean, and label each of the images. The end result is ImageNet: almost 15 million, or by some counts 40 million plus, images organized into twenty-two thousand categories of objects and scenes. This was gigantic, probably the biggest dataset produced in the field of AI at that time, and it began to push the algorithm development of object recognition into another phase. Especially important is how to benchmark progress, so starting in 2009 the ImageNet team rolled out an international challenge called the ImageNet Large-Scale Visual Recognition Challenge. For this challenge we put together a more stringent test set of 1.4 million images across 1,000 object classes, and this is used to test the image classification results of computer vision algorithms. Here's an example picture: an algorithm outputs five labels, and if those top five labels include the correct object in the picture, we call it a success. Here is a summary of the ImageNet Challenge image classification results from 2010 to 2015: on the x-axis you see the years, and on the y-axis you see the error rate. The good news is the error rate steadily decreased, to the point that by 2015 it was so low it was on par with what humans can do; and by a human here I mean a single Stanford PhD student who spent weeks doing this task as if he were a computer participating in the ImageNet Challenge. That's a lot of progress, even though we have not solved all the problems of object recognition, which you'll learn about in this class. But to go from an error rate that's unacceptable for real-world applications all the way to being on par with humans on the ImageNet challenge took the field only a few years. And one particular moment you should notice on this graph is the year 2012.
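As an aside on the evaluation just described, here is a tiny NumPy sketch of the top-5 error rule; the scores and labels are random placeholders and the function name is made up for the example.

```python
# Sketch of the ImageNet-style "top-5" rule: a prediction counts as correct
# if the true label appears among the five highest-scoring classes.
import numpy as np

def top5_error(scores, labels):
    # scores: (N, num_classes) class scores, labels: (N,) ground-truth labels
    top5 = np.argsort(-scores, axis=1)[:, :5]          # indices of 5 best classes
    correct = np.any(top5 == labels[:, None], axis=1)  # true label among them?
    return 1.0 - correct.mean()

scores = np.random.randn(4, 1000)   # 4 fake images, 1000 ImageNet classes
labels = np.array([3, 100, 7, 42])
print("top-5 error:", top5_error(scores, labels))
```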
In the first two years the error rate hovered around 25 percent, but in 2012 it dropped by almost 10 percentage points, to about 16 percent (it's even better now, but that drop was very significant), and the winning algorithm of that year was a convolutional neural network model that beat all the other algorithms of the time to win the ImageNet challenge. This is the focus of our whole course this quarter: to take a deep dive into what convolutional neural network models are (another name for this, and now the more popular one, is deep learning) and to look at what these models are, what the principles are, what the good practices are, and what the recent progress of these models is. But here is where history was made: around 2012, convolutional neural network models, or deep learning models, showed tremendous capacity and ability to make real progress in the field of computer vision, along with several sister fields like natural language processing and speech recognition. So without further ado I'm going to hand the rest of the lecture over to Justin to talk about the overview of CS 231n. Alright, thanks so much Fei-Fei. I'll take it over from here. So now I want to shift gears a little bit and talk a little bit more about this class, CS231n. The primary focus of this class is the image classification problem, which we previewed a little bit in the context of the ImageNet Challenge. In image classification, again, the setup is that your algorithm looks at an image and then picks from among some fixed set of categories to classify that image. This might seem like a somewhat restrictive or artificial setup, but it's actually quite general, and this problem can be applied in many different settings, both in industry and academia and many other places. For example, you could apply this to recognizing food, or recognizing calories in food, or recognizing different artworks or different products out in the world. So this relatively basic tool of image classification is super useful on its own and can be applied all over the place for many different applications. But in this course we're also going to talk about several other visual recognition problems that build upon many of the tools we develop for the purpose of image classification. We'll talk about other problems such as object detection or image captioning. The setup in object detection is a little bit different. Rather than classifying an entire image as a cat or a dog or a horse or whatnot, we want to go in and draw bounding boxes and say that there is a dog here, and a cat here, and a car over in the background, drawing these boxes that describe where objects are in the image. We'll also talk about image captioning, where, given an image, the system needs to produce a natural language sentence describing the image. It sounds like a really hard, complicated, and different problem, but we'll see that many of the tools we develop in service of image classification will be reused in these other problems as well. We mentioned this before in the context of the ImageNet Challenge, but one of the things that has really driven the progress of the field in recent years has been the adoption of convolutional neural networks, or CNNs, sometimes called convnets.
So if we look at the algorithms that have won the ImageNet Challenge over the last several years, in 2011 we see this method from Lin et al which is still hierarchical. It consists of multiple layers: first we compute some features, next we compute some local invariances and some pooling, we go through several layers of processing, and then finally feed the resulting descriptor to a linear SVM. What you'll notice here is that this is still hierarchical, we're still detecting edges, we still have notions of invariance, and many of these intuitions will carry over into convnets. But the breakthrough moment was really in 2012, when Geoff Hinton's group in Toronto, together with Alex Krizhevsky and Ilya Sutskever, who were his PhD students at the time, created this seven-layer convolutional neural network, now known as AlexNet and then called SuperVision, which just did very, very well in the ImageNet competition in 2012. Since then, every year the winner of ImageNet has been a neural network, and the trend has been that these networks are getting deeper and deeper each year. AlexNet was a seven- or eight-layer neural network, depending on how exactly you count things. In 2014 we had these much deeper networks: GoogLeNet from Google and the VGG network from Oxford, which was about 19 layers at that time. Then in 2015 it got really crazy, and this paper came out from Microsoft Research Asia on Residual Networks, which were 152 layers at that time. Since then it turns out you can do a little bit better if you go up to 200 layers, but you run out of memory on your GPUs. We'll get into all of that later, but the main takeaway here is that convolutional neural networks really had this breakthrough moment in 2012, and since then there's been a lot of effort focused on tuning and tweaking these algorithms to make them perform better and better on this problem of image classification. Throughout the rest of the quarter we're going to really dive in deep, and you'll understand exactly how these different models work. But one point that's really important: it's true that the breakthrough moment for convolutional neural networks was in 2012, when these networks performed very well on the ImageNet Challenge, but they certainly weren't invented in 2012. These algorithms had actually been around for quite a long time before that. One of the foundational works in this area of convolutional neural networks was actually in the '90s, from Yann LeCun and collaborators, who at that time were at Bell Labs. In 1998 they built this convolutional neural network for recognizing digits. They wanted to deploy it to automatically recognize handwritten checks or addresses for the post office, and they built this convolutional neural network which could take in the pixels of an image and then classify what digit or letter it was. The structure of this network actually looks pretty similar to the AlexNet architecture that was used in 2012. Here we see that we're taking in raw pixels, and we have many layers of convolution and subsampling, together with so-called fully connected layers, all of which will be explained in much more detail later in the course. But if you just look at these two pictures, they look pretty similar, and the 2012 architecture shares a lot of architectural similarities with this network going back to the '90s.
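For a concrete feel of that 1990s-style pattern, here is a rough, LeNet-flavored sketch in PyTorch; it is not the exact 1998 network, just the same overall pattern of convolution, subsampling, and fully connected layers, with illustrative layer sizes.

```python
# A LeNet-flavored sketch: convolution + subsampling stages followed by
# fully connected layers, for 32x32 grayscale digit images (assumed).
import torch
import torch.nn as nn

lenet_like = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5),   # 32x32 -> 28x28
    nn.Tanh(),
    nn.AvgPool2d(2),                  # "subsampling": 28x28 -> 14x14
    nn.Conv2d(6, 16, kernel_size=5),  # 14x14 -> 10x10
    nn.Tanh(),
    nn.AvgPool2d(2),                  # 10x10 -> 5x5
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120),       # fully connected layers
    nn.Tanh(),
    nn.Linear(120, 10),               # 10 digit classes
)

x = torch.randn(1, 1, 32, 32)
print(lenet_like(x).shape)            # torch.Size([1, 10])
```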
So then the question you might ask is if these algorithms were around since the '90s, why have they only suddenly become popular in the last couple of years? And, there's a couple really key innovations that happened that have changed since the '90s. One is computation. Thanks to Moore's law, we've gotten faster and faster computers every year. And, this is kind of a coarse measure, but if you just look at the number of transistors that are on chips, then that has grown by several orders of magnitude between the '90s and today. We've also had this advent of graphics processing units or GPUs which are super parallelizable and ended up being a perfect tool for really crunching these computationally intensive convolutional neural network models. So just by having more compute available, it allowed researchers to explore with larger architectures and larger models, and in some cases, just increasing the model size, but still using these kind of classical approaches and classical algorithms tends to work quite well. So this idea of increasing computation is super important in the history of deep learning. I think the second key innovation that changed between now and the '90s was data. So these algorithms are very hungry for data. You need to feed them a lot of labeled images and labeled pixels for them to eventually work quite well. And, in the '90s there just wasn't that much labeled data available. This was, again, before tools like Mechanical Turk, before the internet was super, super widely used. And, it was very difficult to collect large, varied datasets. But, now in the 2010s with datasets like PASCAL and ImageNet, there existed these relatively large, high quality labeled datasets that were, again, orders and orders magnitude bigger than the dataset available in the '90s. And, these much large datasets, again, allowed us to work with higher capacity models and train these models to actually work quite well on real world problems. But, the critical takeaway here is that convolutional neural networks although they seem like this sort of fancy, new thing that's only popped up in the last couple of years, that's really not the case. And, these class of algorithms have existed for quite a long time in their own right as well. Another thing I'd like to point out in computer vision we're in the business of trying to build machines that can see like people. And, people can actually do a lot of amazing things with their visual systems. When you go around the world, you do a lot more than just drawing boxes around the objects and classifying things as cats or dogs. Your visual system is much more powerful than that. And, as we move forward in the field, I think there's still a ton of open challenges and open problems that we need to address. And, we need to continue to develop our algorithms to do even better and tackle even more ambitious problems. Some examples of this are going back to these older ideas in fact. Things like semantic segmentation or perceptual grouping where rather than labeling the entire image, we want to understand for every pixel in the image what is it doing, what does it mean. And, we'll revisit that idea a little bit later in the course. There's definitely work going back to this idea of 3D understanding, of reconstructing the entire world, and that's still an unsolved problem I think. There're just tons and tons of other tasks that you can imagine. 
For example activity recognition, if I'm given a video of some person doing some activity, what's the best way to recognize that activity? That's quite a challenging problem as well. And, then as we move forward with things like augmented reality and virtual reality, and as new technologies and new types of sensors become available, I think we'll come up with a lot of new, interesting hard and challenging problems to tackle as a field. So this is an example from some of my own work in the vision lab on this dataset called Visual Genome. So here the idea is that we're trying to capture some of these intricacies in the real world. Rather than maybe describing just boxes, maybe we should be describing images as these whole large graphs of semantically related concepts that encompass not just object identities but also object relationships, object attributes, actions that are occurring in the scene, and this type of representation might allow us to capture some of this richness of the visual world that's left on the table when we're using simple classification. This is by no means a standard approach at this point, but just kind of giving you this sense that there's so much more that your visual system can do that is maybe not captured in this vanilla image classification setup. I think another really interesting work that kind of points in this direction actually comes from Fei-Fei's grad school days when she was doing her PHD at Cal Tech with her advisors there. In this setup, they had people, they stuck people, and they showed people this image for just half a second. So they flashed this image in front of them for just a very short period of time, and even in this very, very rapid exposure to an image, people were able to write these long descriptive paragraphs giving a whole story of the image. And, this is quite remarkable if you think about it that after just half a second of looking at this image, a person was able to say that this is some kind of a game or fight, two groups of men. The man on the left is throwing something. Outdoors because it seem like I have an impression of grass, and so on and so on. And, you can imagine that if a person were to look even longer at this image, they could write probably a whole novel about who these people are, and why are they in this field playing this game. They could go on and on and on roping in things from their external knowledge and their prior experience. This is in some sense the holy grail of computer vision. To sort of understand the story of an image in a very rich and deep way. And, I think that despite the massive progress in the field that we've had over the past several years, we're still quite a long way from achieving this holy grail. Another image that I think really exemplifies this idea actually comes, again, from Andrej Karpathy's blog is this amazing image. Many of you smiled, many of you laughed. I think this is a pretty funny image. But, why is it a funny image? Well we've got a man standing on a scale, and we know that people are kind of self conscious about their weight sometimes, and scales measure weight. Then we've got this other guy behind him pushing his foot down on the scale, and we know that because of the way scales work that will cause him to have an inflated reading on the scale. But, there's more. We know that this person is not just any person. 
This is actually Barack Obama who was at the time President of the United States, and we know that Presidents of the United States are supposed to be respectable politicians that are [laughing] probably not supposed to be playing jokes on their compatriots in this way. We know that there's these people in the background that are laughing and smiling, and we know that that means that they're understanding something about the scene. We have some understanding that they know that President Obama is this respectable guy who's looking at this other guy. Like, this is crazy. There's so much going on in this image. And, our computer vision algorithms today are actually a long way I think from this true, deep understanding of images. So I think that sort of despite the massive progress in the field, we really have a long way to go. To me, that's really exciting as a researcher 'cause I think that we'll have just a lot of really exciting, cool problems to tackle moving forward. So I hope at this point I've done a relatively good job to convince you that computer vision is really interesting. It's really exciting. It can be very useful. It can go out and make the world a better place in various ways. Computer vision could be applied in places like medical diagnosis and self-driving cars and robotics and all these different places. In addition to sort of tying back to sort of this core idea of understanding human intelligence. So to me, I think that computer vision is this fantastically amazing, interesting field, and I'm really glad that over the course of the quarter, we'll get to really dive in and dig into all these different details about how these algorithms are working these days. That's sort of my pitch about computer vision and about the history of computer vision. I don't know if there's any questions about this at this time. Okay. So then I want to talk a little bit more about the logistics of this class for the rest of the quarter. So you might ask who are we? So this class is taught by Fei-Fei Li who is a professor of computer science here at Standford who's my advisor and director of the Stanford Vision Lab and also the Stanford AI Lab. The other two instructors are me, Justin Johnson, and Serena Yeung who is up here in the front. We're both PHD students working under Fei-Fei on various computer vision problems. We have an amazing teaching staff this year of 18 TAs so far. Many of whom are sitting over here in the front. These guys are really the unsung heroes behind the scenes making the course run smoothly, making sure everything happens well. So be nice to them. [laughing] I think I also should mention this is the third time we've taught this course, and it's the first time that Andrej Karpathy has not been an instructor in this course. He was a very close friend of mine. He's still alive. He's okay, don't worry. [laughing] But, he graduated, so he's actually here I think hanging around in the lecture hall. A lot of the development and the history of this course is really due to him working on it with me over the last couple of years. So I think you should be aware of that. Also about logistics, probably the best way for keeping in touch with the course staff is through Piazza. You should all go and signup right now. Piazza is really our preferred method of communication with the class with the teaching staff. 
If you have questions that you're embarrassed to ask in front of your classmates, go ahead and ask anonymously, or even post private questions directly to the teaching staff. Basically anything that you need should ideally go through Piazza. We also have a staff mailing list, but we ask that this be used mostly for personal, confidential things that you don't want going on Piazza; if you have something that's super confidential or super personal, then feel free to directly email me or Fei-Fei or Serena about that. But for the most part, most of your communication with the staff should be through Piazza. We also have an optional textbook this year. This is by no means required; you can go through the course totally fine without it, and everything will be self-contained. This is sort of exciting because it's maybe the first textbook about deep learning, published earlier this year by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. I put the Amazon link here in the slides. You can get it if you want to, but the whole content of the book is also free online, so you don't even have to buy it if you don't want to. Again, this is totally optional, but we'll probably be posting some readings throughout the quarter that give you an additional perspective on some of the material. Our philosophy about this class is that you should really understand the deep mechanics of all of these algorithms. You should understand at a very deep level exactly how these algorithms work: what exactly is going on when you're stitching together these neural networks, and how these architectural decisions influence how the network is trained and tested and so on. Throughout the course, through the assignments, you'll be implementing your own convolutional neural networks from scratch in Python. You'll be implementing the full forward and backward passes through these things, and by the end you'll have implemented a whole convolutional neural network totally on your own. I think that's really cool. But we're also kind of practical, and we know that in most cases people are not writing these things from scratch, so we also want to give you a good introduction to some of the state-of-the-art software tools that are used in practice for these things. We're going to talk about some of the state-of-the-art software packages like TensorFlow, Torch, PyTorch, and all these other things, and I think you'll get some exposure to those on the homeworks and definitely through the course project as well. Another note about this course is that it's very state of the art. I think it's super exciting; this is a very fast-moving field. As you saw, even in those ImageNet challenge plots there's been a ton of progress since 2012, and while I've been in grad school the whole field has been transforming every year. That's super exciting and super encouraging, but what it means is that there's probably content we'll cover this year that did not exist the last time this course was taught, last year. I think that's super exciting, and one of my favorite parts about teaching this course is roping in all this new, hot-off-the-presses scientific material and being able to present it to you guys. We're also sort of about fun, so we're going to talk about some interesting, maybe not-so-serious topics as well this quarter, including image captioning, which is pretty fun, where we can write descriptions about images.
But, we'll also cover some of these more artistic things like DeepDream here on the left where we can use neural networks to hallucinate these crazy, psychedelic images. And, by the end of the course, you'll know how that works. Or on the right, this idea of style transfer where we can take an image and render it in the style of famous artists like Picasso or Van Gogh or what not. And again, by the end of the quarter, you'll see how this stuff works. So the way the course works is we're going to have three problem sets. The first problem set will hopefully be out by the end of the week. We'll have an in class, written midterm exam. And, a large portion of your grade will be the final course project where you'll work in teams of one to three and produce some amazing project that will blow everyone's minds. We have a late policy, so you have seven late days that you're free to allocate among your different homeworks. These are meant to cover things like minor illnesses or traveling or conferences or anything like that. If you come to us at the end of the quarter and say that, "I suddenly have to give a presentation "at this conference." That's not going to be okay. That's what your late days are for. That being said, if you have some very extenuating circumstances, then do feel free to email the course staff if you have some extreme circumstances about that. Finally, I want to make a note about the collaboration policy. As Stanford students, you should all be aware of the honor code that governs the way that you should be collaborating and working together, and we take this very seriously. We encourage you to think very carefully about how you're collaborating and making sure it's within the bounds of the honor code. So in terms of prerequisites, I think the most important is probably a deep familiarity with Python because all of the programming assignments will be in Python. Some familiarity with C or C++ would be useful. You will probably not be writing any C or C++ in this course, but as you're browsing through the source code of these various software packages, being able to read C++ code at least is very useful for understanding how these packages work. We also assume that you know what calculus is, you know how to take derivatives all that sort of stuff. We assume some linear algebra. That you know what matrices are and how to multiply them and stuff like that. We can't be teaching you how to take like derivatives and stuff. We also assume a little bit of knowledge coming in of computer vision maybe at the level of CS131 or 231a. If you have taken those courses before, you'll be fine. If you haven't, I think you'll be okay in this class, but you might have a tiny bit of catching up to do. But, I think you'll probably be okay. Those are not super strict prerequisites. We also assume a little bit of background knowledge about machine learning maybe at the level of CS229. But again, I think really important, key fundamental machine learning concepts we'll reintroduce as they come up and become important. But, that being said, a familiarity with these things will be helpful going forward. So we have a course website. Go check it out. There's a lot of information and links and syllabus and all that. I think that's all that I really want to cover today. And, then later this week on Thursday, we'll really dive into our first learning algorithm and start diving into the details of these things.
Convolutional Neural Networks for Visual Recognition, Spring 2017. Lecture 11: Detection and Segmentation.
- Hello, hi. So I want to get started. Welcome to CS 231N Lecture 11. We're going to talk about today detection segmentation and a whole bunch of other really exciting topics around core computer vision tasks. But as usual, a couple administrative notes. So last time you obviously took the midterm, we didn't have lecture, hopefully that went okay for all of you but so we're going to work on grading the midterm this week, but as a reminder please don't make any public discussions about the midterm questions or answers or whatever until at least tomorrow because there are still some people taking makeup midterms today and throughout the rest of the week so we just ask you that you refrain from talking publicly about midterm questions. Why don't you wait until Monday? [laughing] Okay, great. So we're also starting to work on midterm grading. We'll get those back to you as soon as you can, as soon as we can. We're also starting to work on grading assignment two so there's a lot of grading being done this week. The TA's are pretty busy. Also a reminder for you guys, hopefully you've been working hard on your projects now that most of you are done with the midterm so your project milestones will be due on Tuesday so any sort of last minute changes that you had in your projects, I know some people decided to switch projects after the proposal, some teams reshuffled a little bit, that's fine but your milestone should reflect the project that you're actually doing for the rest of the quarter. So hopefully that's going out well. I know there's been a lot of worry and stress on Piazza, wondering about assignment three. So we're working on that as hard as we can but that's actually a bit of a new assignment, it's changing a bit from last year so it will be out as soon as possible, hopefully today or tomorrow. Although we promise that whenever it comes out you'll have two weeks to finish it so try not to stress out about that too much. But I'm pretty excited, I think assignment three will be really cool, has a lot of cool, it'll cover a lot of really cool material. So another thing, last time in lecture we mentioned this thing called the Train Game which is this really cool thing we've been working on sort of as a side project a little bit. So this is an interactive tool that you guys can go on and use to explore a little bit the process of tuning hyperparameters in practice so we hope that, so this is again totally not required for the course. Totally optional, but if you do we will offer a small amount of extra credit for those of you who want to do well and participate on this. And we'll send out exactly some more details later this afternoon on Piazza. But just a bit of a demo for what exactly is this thing. So you'll get to go in and we've changed the name from Train Game to HyperQuest because you're questing to solve, to find the best hyperparameters for your model so this is really cool, it'll be an interactive tool that you can use to explore the training of hyperparameters interactively in your browser. So you'll login with your student ID and name. You'll fill out a little survey with some of your experience on deep learning then you'll read some instructions. So in this game you'll be shown some random data set on every trial. This data set might be images or it might be vectors and your goal is to train a model by picking the right hyperparameters interactively to perform as well as you can on the validation set of this random data set. 
And it'll sort of keep track of your performance over time and there'll be a leaderboard, it'll be really cool. So every time you play the game, you'll get some statistics about your data set. In this case we're doing a classification problem with 10 classes. You can see down at the bottom you have these statistics about random data set, we have 10 classes. The input data size is three by 32 by 32 so this is some image data set and we can see that in this case we have 8500 examples in the training set and 1500 examples in the validation set. These are all random, they'll change a little bit every time. Based on these data set statistics you'll make some choices on your initial learning rate, your initial network size, and your initial dropout rate. Then you'll see a screen like this where it'll run one epoch with those chosen hyperparameters, show you on the right here you'll see two plots. One is your training and validation loss for that first epoch. Then you'll see your training and validation accuracy for that first epoch and based on the gaps that you see in these two graphs you can make choices interactively to change the learning rates and hyperparameters for the next epoch. So then you can either choose to continue training with the current or changed hyperparameters, you can also stop training, or you can revert to go back to the previous checkpoint in case things got really messed up. So then you'll get to make some choice, so here we'll decide to continue training and in this case you could go and set new learning rates and new hyperparameters for the next epoch of training. You can also, kind of interesting here, you can actually grow the network interactively during training in this demo. There's this cool trick from a couple recent papers where you can either take existing layers and make them wider or add new layers to the network in the middle of training while still maintaining the same function in the network so you can do that to increase the size of your network in the middle of training here which is kind of cool. So then you'll make choices over several epochs and eventually your final validation accuracy will be recorded and we'll have some leaderboard that compares your score on that data set to some simple baseline models. And depending on how well you do on this leaderboard we'll again offer some small amounts of extra credit for those of you who choose to participate. So this is again, totally optional, but I think it can be a really cool learning experience for you guys to play around with and explore how hyperparameters affect the learning process. Also, it's really useful for us. You'll help science out by participating in this experiment. We're pretty interested in seeing how people behave when they train neural networks so you'll be helping us out as well if you decide to play this. But again, totally optional, up to you. Any questions on that? Hopefully at some point but it's. So the question was will this be a paper or whatever eventually? Hopefully but it's really early stages of this project so I can't make any promises but I hope so. But I think it'll be really cool. [laughing] Yeah, so the question is how can you add layers during training? I don't really want to get into that right now but the paper to read is Net2Net by Ian Goodfellow's one of the authors and there's another paper from Microsoft called Network Morphism. So if you read those two papers you can see how this works. 
Okay, so as a bit of a reminder, in the last lecture before the midterm we talked about recurrent neural networks. We saw that recurrent neural networks can be used for different types of problems: in addition to one-to-one, we can do one-to-many, many-to-one, and many-to-many. We saw how this can apply to language modeling, and we saw some cool examples of applying neural networks to model different sorts of languages at the character level, where we sampled artificial math, Shakespeare, and C source code. We also saw how similar things could be applied to image captioning by connecting a CNN feature extractor together with an RNN language model, and we saw some really cool examples of that. We also talked about the different types of RNNs. We talked about the vanilla RNN; I also want to mention that this is sometimes called a simple RNN or an Elman RNN, so you'll see all of these different terms in the literature. We also talked about the Long Short-Term Memory, or LSTM. The LSTM has this crazy set of equations, but it makes sense because it helps improve gradient flow during backpropagation and helps the model capture longer-term dependencies in our sequences. So today we're going to switch gears and talk about a whole bunch of different exciting tasks. So far we've mostly been talking about the image classification problem. Today we're going to talk about various other computer vision tasks where you actually want to go in and say things about the spatial pixels inside your images, so we'll see segmentation, localization, detection, and a couple of other computer vision tasks, and how you can approach these with convolutional neural networks. As a bit of a refresher, so far the main thing we've been talking about in this class is image classification. Here we have some input image come in; that input image goes through some deep convolutional network, and that network gives us some feature vector of maybe 4096 dimensions in the case of AlexNet or VGG. Then from that final feature vector we'll have some final fully connected layer that gives us 1000 numbers for the different class scores that we care about, where 1000 is maybe the number of classes in ImageNet in this example. At the end of the day, what the network does is take an input image and output a single category label saying what the content of the entire image is as a whole. But this is maybe the most basic possible task in computer vision, and there's a whole bunch of other interesting types of tasks that we might want to solve using deep learning. So today we're going to talk about several of these different tasks, step through each of them, and see how they all work with deep learning. We'll talk in more detail about what each problem is as we get to it, but this is kind of a summary slide: we'll talk first about semantic segmentation, then about classification and localization, then about object detection, and finally say a couple of brief words about instance segmentation. So first is the problem of semantic segmentation. In semantic segmentation, we want to input an image and then output a decision of a category for every pixel in that image. This input image, for example, is this cat walking through the field; he's very cute.
And in the output we want to say for every pixel is that pixel a cat or grass or sky or trees or background or some other set of categories. So we're going to have some set of categories just like we did in the image classification case but now rather than assigning a single category labeled to the entire image, we want to produce a category label for each pixel of the input image. And this is called semantic segmentation. So one interesting thing about semantic segmentation is that it does not differentiate instances so in this example on the right we have this image with two cows where they're standing right next to each other and when we're talking about semantic segmentation we're just labeling all the pixels independently for what is the category of that pixel. So in the case like this where we have two cows right next to each other the output does not make any distinguishing, does not distinguish between these two cows. Instead we just get a whole mass of pixels that are all labeled as cow. So this is a bit of a shortcoming of semantic segmentation and we'll see how we can fix this later when we move to instance segmentation. But at least for now we'll just talk about semantic segmentation first. So you can imagine maybe using a class, so one potential approach for attacking semantic segmentation might be through classification. So there's this, you could use this idea of a sliding window approach to semantic segmentation. So you might imagine that we take our input image and we break it up into many many small, tiny local crops of the image so in this example we've taken maybe three crops from around the head of this cow and then you could imagine taking each of those crops and now treating this as a classification problem. Saying for this crop, what is the category of the central pixel of the crop? And then we could use all the same machinery that we've developed for classifying entire images but now just apply it on crops rather than on the entire image. And this would probably work to some extent but it's probably not a very good idea. So this would end up being super super computationally expensive because we want to label every pixel in the image, we would need a separate crop for every pixel in that image and this would be super super expensive to run forward and backward passes through. And moreover, we're actually, if you think about this we can actually share computation between different patches so if you're trying to classify two patches that are right next to each other and actually overlap then the convolutional features of those patches will end up going through the same convolutional layers and we can actually share a lot of the computation when applying this to separate passes or when applying this type of approach to separate patches in the image. So this is actually a terrible idea and nobody does this and you should probably not do this but it's at least the first thing you might think of if you were trying to think about semantic segmentation. Then the next idea that works a bit better is this idea of a fully convolutional network right. 
Rather than extracting individual patches from the image and classifying these patches independently, we can imagine just having our network be one giant stack of convolutional layers with no fully connected layers or anything. In this case we just have a bunch of convolutional layers that are all maybe three by three with zero padding, or something like that, so that each convolutional layer preserves the spatial size of the input. Now if we pass our image through a whole stack of these convolutional layers, the final convolutional layer can just output a tensor of shape C by H by W, where C is the number of categories we care about, and you can see this tensor as giving our classification scores for every pixel, at every location, in the input image. We can compute this all at once with just one giant stack of convolutional layers, and then you could imagine training this thing by putting a classification loss at every pixel of the output, taking an average over those pixels in space, and training this kind of network through normal, regular backpropagation. Question? Oh, the question is how do you develop training data for this? It's very expensive, right: the training data for this requires labeling every pixel in those input images. There are tools online where you can go in and draw contours around the objects and then fill in regions, but in general getting this kind of training data is very expensive. Yeah, the question is what is the loss function? Since we're making a classification decision per pixel, we put a cross-entropy loss on every pixel of the output. We have the ground-truth category label for every pixel, we compute a cross-entropy loss between every pixel of the output and the ground-truth pixels, and then take either a sum or an average over space, and then a sum or an average over the mini-batch. Question? Yeah, the question is do we assume that we know the categories? Yes, we do assume that we know the categories up front, so this is just like the image classification case. In image classification, we know at the start of training, based on our dataset, that there are maybe 10 or 20 or 100 or 1000 classes we care about, and here we are likewise fixed to the set of classes defined by the dataset. This model is relatively simple, and you can imagine it working reasonably well assuming you tuned all the hyperparameters right, but there's kind of a problem. In this setup, since we're applying a bunch of convolutions that all keep the same spatial size as the input image, it would be super, super expensive. If you wanted convolutions with maybe 64 or 128 or 256 channels for those convolutional filters, which is pretty common in a lot of these networks, then running those convolutions on a high-resolution input image over a sequence of layers would be extremely computationally expensive and would take a ton of memory. So in practice you don't usually see networks with this architecture. Instead you tend to see networks that look something like this, where we have some downsampling and then some upsampling of the feature map inside the network.
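Before moving on to that downsample-and-upsample design, here is a minimal PyTorch sketch of the simple version just described: a stack of 3x3, stride-1, pad-1 convolutions that preserve spatial size, ending in C channels of per-pixel class scores, trained with a cross-entropy loss averaged over pixels. The sizes and class count are illustrative assumptions, not values from the lecture.

```python
# Fully convolutional sketch: every layer preserves H x W, last layer outputs
# C per-pixel class scores, loss is cross-entropy averaged over all pixels.
import torch
import torch.nn as nn
import torch.nn.functional as F

C = 5  # number of semantic classes (assumed)

fcn = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, C, 3, padding=1),          # per-pixel class scores
)

images = torch.randn(2, 3, 64, 64)           # (N, 3, H, W)
labels = torch.randint(0, C, (2, 64, 64))    # ground-truth class per pixel

scores = fcn(images)                         # (N, C, H, W)
loss = F.cross_entropy(scores, labels)       # averaged over pixels and batch
loss.backward()
print(scores.shape, loss.item())
```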
Rather than doing all the convolutions at the full spatial resolution of the image, we'll maybe go through a small number of convolutional layers at the original resolution, then downsample that feature map using something like max pooling or strided convolutions, and keep downsampling, so we have convolutions and downsampling, then more convolutions and downsampling, which looks much like a lot of the classification networks that you see. But now the difference is that rather than transitioning to a fully connected layer like you might do in an image classification setup, we instead want to increase the spatial resolution of our predictions in the second half of the network so that our output can be the same size as our input image. This ends up being much more computationally efficient, because you can make the network very deep and work at a lower spatial resolution for many of the layers in the middle of the network. We've already seen examples of downsampling in convolutional networks: you can do strided convolutions or various types of pooling to reduce the spatial size of the feature map inside a network. But we haven't really talked about upsampling, and the question you might be wondering is, what do these upsampling layers actually look like inside the network? What are our strategies for increasing the size of a feature map inside the network? Sorry, was there a question in the back? Yeah, so the question is how do we upsample? And the answer is that's the topic of the next couple of slides. [laughing] One strategy for upsampling is something like unpooling. We have this notion of pooling to downsample; we talked about average pooling and max pooling, and with average pooling we're taking a spatial average within the receptive field of each pooling region. One analog for upsampling is this idea of nearest neighbor unpooling. Here on the left we see an example of nearest neighbor unpooling, where our input is maybe some two by two grid and our output is a four by four grid, and in our output we've done a two by two, stride two nearest neighbor unpooling or upsampling, where we've just duplicated each element at every point of its two by two receptive field in the unpooling region. Another thing you might see is this bed of nails unpooling or bed of nails upsampling. Again we have a two by two receptive field for our unpooling regions, and in this case we make the output all zeros except for one element of each unpooling region; here we've taken all of our inputs and always put them in the upper left hand corner of the unpooling region, and everything else is zeros. This is kind of like a bed of nails because the zeros are very flat, and then you've got these things poking up for the values at the various non-zero locations. Another thing that you see sometimes, which was alluded to by the question a minute ago, is this idea of max unpooling. A lot of these networks tend to be symmetrical, where we have a downsampling portion of the network and then an upsampling portion of the network, with a symmetry between those two portions.
So sometimes what you'll see is this idea of max unpooling where for each unpooling, for each upsampling layer, it is associated with one of the pooling layers in the first half of the network and now in the first half, in the downsampling when we do max pooling we'll actually remember which element of the receptive field during max pooling was used to do the max pooling and now when we go through the rest of the network then we'll do something that looks like this bed of nails upsampling except rather than always putting the elements in the same position, instead we'll stick it into the position that was used in the corresponding max pooling step earlier in the network. I'm not sure if that explanation was clear but hopefully the picture makes sense. Yeah, so then you just end up filling the rest with zeros. So then you fill the rest with zeros and then you stick the elements from the low resolution patch up into the high resolution patch at the points where the max pooling took place at the corresponding max pooling there. Okay, so that's kind of an interesting idea. Sorry, question? Oh yeah, so the question is why is this a good idea? Why might this matter? So the idea is that when we're doing semantic segmentation we want our predictions to be pixel perfect right. We kind of want to get those sharp boundaries and those tiny details in our predictive segmentation so now if you're doing this max pooling, there's this sort of heterogeneity that's happening inside the feature map due to the max pooling where from the low resolution image you don't know, you're sort of losing spatial information in some sense by you don't know where that feature vector came from in the local receptive field after max pooling. So if you actually unpool by putting the vector in the same slot you might think that that might help us handle these fine details a little bit better and help us preserve some of that spatial information that was lost during max pooling. Question? The question is does this make things easier for back prop? Yeah, I guess, I don't think it changes the back prop dynamics too much because storing these indices is not a huge computational overhead. They're pretty small in comparison to everything else. So another thing that you'll see sometimes is this idea of transpose convolution. So transpose convolution, so for these various types of unpooling that we just talked about, these bed of nails, this nearest neighbor, this max unpooling, all of these are kind of a fixed function, they're not really learning exactly how to do the upsampling so if you think about something like strided convolution, strided convolution is kind of like a learnable layer that learns the way that the network wants to perform downsampling at that layer. And by analogy with that there's this type of layer called a transpose convolution that lets us do kind of learnable upsampling. So it will both upsample the feature map and learn some weights about how it wants to do that upsampling. And this is really just another type of convolution so to see how this works remember how a normal three by three stride one pad one convolution would work. 
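Before getting to transpose convolutions, here is a small PyTorch sketch of the unpooling variants described above: nearest-neighbor upsampling, and max pooling paired with max unpooling via the recorded argmax indices. The 4x4 input is arbitrary and just for illustration.

```python
# Unpooling sketches: nearest-neighbor duplication, and max unpooling that
# restores each value to the position that "won" during max pooling,
# filling the rest of each region with zeros.
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.arange(16.0).reshape(1, 1, 4, 4)

# Nearest-neighbor upsampling of a pooled 2x2 map back to 4x4.
small = F.max_pool2d(x, 2)
print(F.interpolate(small, scale_factor=2, mode="nearest"))

# Max pooling that records argmax indices, and the paired max unpooling.
pool = nn.MaxPool2d(2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(2, stride=2)
pooled, indices = pool(x)
print(unpool(pooled, indices))
```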
For this kind of normal convolution that we've seen many times now in this class, our input might be four by four, our output might be four by four, and now we'll have this three by three kernel; we'll plop down that kernel at the corner of the image, take an inner product, and that inner product will give us the value of the activation in the upper left hand corner of our output. And we'll repeat this for every receptive field in the image. Now if we talk about strided convolution then strided convolution ends up looking pretty similar. However, our input is maybe a four by four region and our output is a two by two region. But we still have this idea of there being some three by three filter or kernel that we plop down in the corner of the image, take an inner product, and use that to compute a value of the activation in the output. But now with strided convolution the idea is that rather than plopping down that filter at every possible point in the input, instead we're going to move the filter by two pixels in the input every time we move by one pixel in the output. Right, so this stride of two gives us a ratio between how much we move in the input versus how much we move in the output. So when you do a strided convolution with stride two this ends up downsampling the image or the feature map by a factor of two in kind of a learnable way. And now a transpose convolution is sort of the opposite in a way, so here our input will be a two by two region and our output will be a four by four region. But now the operation that we perform with transpose convolution is a little bit different. Now rather than taking an inner product, instead what we're going to do is take the value of our input feature map at that upper left hand corner, and that'll be some scalar value in the upper left hand corner. We're going to multiply the filter by that scalar value and then copy those values over to this three by three region in the output. So rather than taking an inner product with our filter and the input, instead our input gives weights that we will use to weight the filter, and then our output will be weighted copies of the filter that are weighted by the values in the input. And now we can do this sort of same ratio trick in order to upsample, so now when we move one pixel in the input we can plop our filter down two pixels away in the output, and it's the same trick: now the blue pixel in the input is some scalar value, and we'll take that scalar value, multiply it by the values in the filter, and copy those weighted filter values into this new region in the output. The tricky part is that sometimes these receptive fields in the output can overlap, and when these receptive fields in the output overlap we just sum the results in the output. So then you can imagine repeating this process everywhere, and this ends up doing sort of a learnable upsampling where we use these learned convolutional filter weights to upsample the image and increase the spatial size. By the way, you'll see this operation go by a lot of different names in the literature.
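Here's a minimal sketch of such a learnable upsampling layer in PyTorch. The channel counts and the particular padding choices are assumptions, but they are one common way to get a clean factor-of-two upsample out of a 3 by 3 stride 2 transpose convolution.

```python
import torch
import torch.nn as nn

# 3x3 transpose convolution with stride 2: a learnable 2x upsampling layer.
up = nn.ConvTranspose2d(in_channels=64, out_channels=64, kernel_size=3,
                        stride=2, padding=1, output_padding=1)

x = torch.randn(1, 64, 16, 16)
y = up(x)
print(y.shape)   # torch.Size([1, 64, 32, 32])
```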
Sometimes this gets called things like deconvolution which I think is kind of a bad name but you'll see it out there in papers so from a signal processing perspective deconvolution means the inverse operation to convolution which this is not however you'll frequently see this type of layer called a deconvolution layer in some deep learning papers so be aware of that, watch out for that terminology. You'll also sometimes see this called upconvolution which is kind of a cute name. Sometimes it gets called fractionally strided convolution because if we think of the stride as the ratio in step between the input and the output then now this is something like a stride one half convolution because of this ratio of one to two between steps in the input and steps in the output. This also sometimes gets called a backwards strided convolution because if you think about it and work through the math this ends up being the same, the forward pass of a transpose convolution ends up being the same mathematical operation as the backwards pass in a normal convolution so you might have to take my word for it, that might not be super obvious when you first look at this but that's kind of a neat fact so you'll sometimes see that name as well. And as maybe a bit of a more concrete example of what this looks like I think it's maybe a little easier to see in one dimension so if we imagine, so here we're doing a three by three transpose convolution in one dimension. Sorry, not three by three, a three by one transpose convolution in one dimension. So our filter here is just three numbers. Our input is two numbers and now you can see that in our output we've taken the values in the input, used them to weight the values of the filter and plopped down those weighted filters in the output with a stride of two and now where these receptive fields overlap in the output then we sum. So you might be wondering, this is kind of a funny name. Where does the name transpose convolution come from and why is that actually my preferred name for this operation? So that comes from this kind of neat interpretation of convolution. So it turns out that any time you do convolution you can always write convolution as a matrix multiplication. So again, this is kind of easier to see with a one-dimensional example but here we've got some weight. So we're doing a one-dimensional convolution of a weight vector x which has three elements, and an input vector, a vector, which has four elements, A, B, C, D. So here we're doing a three by one convolution with stride one and you can see that we can frame this whole operation as a matrix multiplication where we take our convolutional kernel x and turn it into some matrix capital X which contains copies of that convolutional kernel that are offset by different regions. And now we can take this giant weight matrix X and do a matrix vector multiplication between x and our input a and this just produces the same result as convolution. And now with transpose convolution means that we're going to take this same weight matrix but now we're going to multiply by the transpose of that same weight matrix. So here you can see the same example for this stride one convolution on the left and the corresponding stride one transpose convolution on the right. 
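Here's a tiny numpy sketch of that matrix view of convolution, in one dimension. The filter values and the input are made up; the point is just that multiplying by the matrix X is an ordinary stride 1 convolution, while multiplying by X transpose scatters weighted copies of the filter and sums where they overlap.

```python
import numpy as np

w = np.array([1., 2., 3.])              # 1-D filter with three taps
a = np.array([1., 2., 3., 4., 5., 6.])  # 1-D input signal

# Rows of X are shifted copies of the filter; each row computes one output
# element of a stride-1 convolution (no padding here, for simplicity).
X = np.zeros((4, 6))
for i in range(4):
    X[i, i:i + 3] = w

out = X @ a        # same as sliding w across a with stride 1

# Transpose convolution: multiply by X.T instead. Each element of `out`
# scatters a weighted copy of the filter into the result, and overlapping
# copies get summed.
up = X.T @ out     # back to length 6
```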
And if you work through the details you'll see that when it comes to stride one, a stride one transpose convolution also ends up being a stride one normal convolution. There are a few details in the way that the border and the padding are handled, but it's fundamentally the same operation. But now things look different when you talk about a stride of two. So again, here on the left we can take a stride two convolution and write out this stride two convolution as a matrix multiplication. And now the corresponding transpose convolution is no longer a convolution, so if you look through this weight matrix and think about how convolutions end up getting represented in this way, then this transposed matrix for the stride two convolution is something fundamentally different from the original normal convolution operation. So that's kind of the reasoning behind the name, and that's why I think that's the nicest name to call this operation by. Sorry, was there a question? Sorry? It's very possible there's a typo in the slide, so please point it out on Piazza and I'll fix it, but I hope the idea was clear. Is there another question? Okay, thank you [laughing]. Yeah, so, oh no, lots of questions. Yeah, so the issue is why do we sum and not average? So the reason we sum is that summing is what falls out of the transpose convolution formulation, but you're right that this is kind of a problem: the magnitudes will actually vary in the output depending on how many receptive fields overlap at each output position. So actually in practice this is something that people started to point out very recently and have somewhat switched away from: using three by three stride two transpose convolution for upsampling can sometimes produce checkerboard artifacts in the output exactly due to that problem. So what I've seen in a couple of more recent papers is to maybe use four by four stride two or two by two stride two transpose convolution for upsampling, and that helps alleviate that problem a little bit. Yeah, so the question is what is a stride half convolution and where does that terminology come from? I think that was from my paper. So yes, that was definitely this. At the time I was writing that paper I was kind of into the name fractionally strided convolution, but after thinking about it a bit more I think transpose convolution is probably the right name. So then this idea of semantic segmentation actually ends up being pretty natural. You just have this giant convolutional network with downsampling and upsampling inside the network; our downsampling will be by strided convolution or pooling, our upsampling will be by transpose convolution or various types of unpooling or upsampling, and we can train this whole thing end to end with back propagation using this cross entropy loss over every pixel (there's a rough code sketch of this setup right below). So this is actually pretty cool, that we can take a lot of the same machinery that we already learned for image classification and now just apply it very easily to extend to new types of problems, so that's super cool. So the next task that I want to talk about is this idea of classification plus localization. So we've talked about image classification a lot, where we want to just assign a category label to the input image, but sometimes you might want to know a little bit more about the image. In addition to predicting what the category is, in this case the cat, you might also want to know where is that object in the image?
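Here is the promised rough sketch of that end-to-end segmentation setup: downsample with strided convolutions, upsample with transpose convolutions, and train with a cross entropy loss over every pixel. This is not the lecture's exact architecture; the layer sizes, depths, and the 21-class assumption are all made up for illustration.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Downsample with strided convs, upsample with transpose convs,
    and end with one score per class at every pixel."""
    def __init__(self, num_classes=21):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.up(self.down(x))           # (N, num_classes, H, W)

model = TinySegNet()
images = torch.randn(2, 3, 64, 64)
labels = torch.randint(0, 21, (2, 64, 64))     # ground-truth class index per pixel
loss = nn.CrossEntropyLoss()(model(images), labels)  # averaged over all pixels
loss.backward()
```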
So in addition to predicting the category label cat, you might also want to draw a bounding box around the region of the cat in that image. And classification plus localization, the distinction here between this and object detection is that in the localization scenario you assume ahead of time that you know there's exactly one object in the image that you're looking for or maybe more than one but you know ahead of time that we're going to make some classification decision about this image and we're going to produce exactly one bounding box that's going to tell us where that object is located in the image so we sometimes call that task classification plus localization. And again, we can reuse a lot of the same machinery that we've already learned from image classification in order to tackle this problem. So kind of a basic architecture for this problem looks something like this. So again, we have our input image, we feed our input image through some giant convolutional network, this is Alex, this is AlexNet for example, which will give us some final vector summarizing the content of the image. Then just like before we'll have some fully connected layer that goes from that final vector to our class scores. But now we'll also have another fully connected layer that goes from that vector to four numbers. Where the four numbers are something like the height, the width, and the x and y positions of that bounding box. And now our network will produce these two different outputs, one is this set of class scores, and the other are these four numbers giving the coordinates of the bounding box in the input image. And now during training time, when we train this network we'll actually have two losses so in this scenario we're sort of assuming a fully supervised setting so we assume that each of our training images is annotated with both a category label and also a ground truth bounding box for that category in the image. So now we have two loss functions. We have our favorite softmax loss that we compute using the ground truth category label and the predicted class scores, and we also have some kind of loss that gives us some measure of dissimilarity between our predicted coordinates for the bounding box and our actual coordinates for the bounding box. So one very simple thing is to just take an L2 loss between those two and that's kind of the simplest thing that you'll see in practice although sometimes people play around with this and maybe use L1 or smooth L1 or they parametrize the bounding box a little bit differently but the idea is always the same, that you have some regression loss between your predicted bounding box coordinates and the ground truth bounding box coordinates. Question? Sorry, go ahead. So the question is, is this a good idea to do all at the same time? Like what happens if you misclassify, should you even look at the box coordinates? So sometimes people get fancy with it, so in general it works okay. It's not a big problem, you can actually train a network to do both of these things at the same time and it'll figure it out but sometimes things can get tricky in terms of misclassification so sometimes what you'll see for example is that rather than predicting a single box you might make predictions like a separate prediction of the box for each category and then only apply loss to the predicted box corresponding to the ground truth category. So people do get a little bit fancy with these things that sometimes helps a bit in practice. 
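As a rough sketch of this basic two-headed setup (not from the lecture): a shared backbone feeds a classification head trained with softmax loss and a box head trained with a simple L2 regression loss, and the two losses get combined with a weighting hyperparameter. The backbone choice, the feature size, and the 0.5 weight are all assumptions; in practice the weighting is tuned against a separate performance metric rather than the loss value itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

# Shared backbone (pretrained on ImageNet in practice; the exact flag for
# loading pretrained weights depends on your torchvision version).
backbone = torchvision.models.alexnet(pretrained=True).features
feat_dim = 256 * 6 * 6                      # AlexNet conv features for a 224x224 input

cls_head = nn.Linear(feat_dim, 1000)        # class scores
box_head = nn.Linear(feat_dim, 4)           # one box as (x, y, w, h)

images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 1000, (8,))       # ground-truth categories
gt_boxes = torch.rand(8, 4)                 # ground-truth boxes (made-up values)

feats = torch.flatten(backbone(images), 1)
cls_loss = F.cross_entropy(cls_head(feats), labels)
box_loss = F.mse_loss(box_head(feats), gt_boxes)    # simple L2 regression loss

box_weight = 0.5                            # hyperparameter: tune on a held-out metric
total_loss = cls_loss + box_weight * box_loss
total_loss.backward()
```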
But at least this basic setup, it might not be perfect or it might not be optimal but it will work and it will do something. Was there a question in the back? Yeah, so that's the question is do these losses have different units, do they dominate the gradient? So this is what we call a multi-task loss so whenever we're taking derivatives we always want to take derivative of a scalar with respect to our network parameters and use that derivative to take gradient steps. But now we've got two scalars that we want to both minimize so what you tend to do in practice is have some additional hyperparameter that gives you some weighting between these two losses so you'll take a weighted sum of these two different loss functions to give our final scalar loss. And then you'll take your gradients with respect to this weighted sum of the two losses. And this ends up being really really tricky because this weighting parameter is a hyperparameter that you need to set but it's kind of different from some of the other hyperparameters that we've seen so far in the past right because this weighting hyperparameter actually changes the value of the loss function so one thing that you might often look at when you're trying to set hyperparameters is you might make different hyperparameter choices and see what happens to the loss under different choices of hyperparameters. But in this case because the loss actually, because the hyperparameter affects the absolute value of the loss making those comparisons becomes kind of tricky. So setting that hyperparameter is somewhat difficult. And in practice, you kind of need to take it on a case by case basis for exactly the problem you're solving but my general strategy for this is to have some other metric of performance that you care about other than the actual loss value which then you actually use that final performance metric to make your cross validation choices rather than looking at the value of the loss to make those choices. Question? So the question is why do we do this all at once? Why not do this separately? Yeah, so the question is why don't we fix the big network and then just only learn separate fully connected layers for these two tasks? People do do that sometimes and in fact that's probably the first thing you should try if you're faced with a situation like this but in general whenever you're doing transfer learning you always get better performance if you fine tune the whole system jointly because there's probably some mismatch between the features, if you train on ImageNet and then you use that network for your data set you're going to get better performance on your data set if you can also change the network. But one trick you might see in practice sometimes is that you might freeze that network then train those two things separately until convergence and then after they converge then you go back and jointly fine tune the whole system. So that's a trick that sometimes people do in practice in that situation. And as I've kind of alluded to this big network is often a pre-trained network that is taken from ImageNet for example. So a bit of an aside, this idea of predicting some fixed number of positions in the image can be applied to a lot of different problems beyond just classification plus localization. One kind of cool example is human pose estimation. So here we want to take an input image is a picture of a person. 
We want to output the positions of the joints for that person and this actually allows the network to predict what is the pose of the human. Where are his arms, where are his legs, stuff like that, and generally most people have the same number of joints. That's a bit of a simplifying assumption, it might not always be true but it works for the network. So for example one parameterization that you might see in some data sets is define a person's pose by 14 joint positions. Their feet and their knees and their hips and something like that and now when we train the network then we're going to input this image of a person and now we're going to output 14 numbers in this case giving the x and y coordinates for each of those 14 joints. And then you apply some kind of regression loss on each of those 14 different predicted points and just train this network with back propagation again. Yeah, so you might see an L2 loss but people play around with other regression losses here as well. Question? So the question is what do I mean when I say regression loss? So I mean something other than cross entropy or softmax right. When I say regression loss I usually mean like an L2 Euclidean loss or an L1 loss or sometimes a smooth L1 loss. But in general classification versus regression is whether your output is categorical or continuous so if you're expecting a categorical output like you ultimately want to make a classification decision over some fixed number of categories then you'll think about a cross entropy loss, softmax loss or these SVM margin type losses that we talked about already in the class. But if your expected output is to be some continuous value, in this case the position of these points, then your output is continuous so you tend to use different types of losses in those situations. Typically an L2, L1, different kinds of things there. So sorry for not clarifying that earlier. But the bigger point here is that for any time you know that you want to make some fixed number of outputs from your network, if you know for example. Maybe you knew that you wanted to, you knew that you always are going to have pictures of a cat and a dog and you want to predict both the bounding box of the cat and the bounding box of the dog in that case you'd know that you have a fixed number of outputs for each input so you might imagine hooking up this type of regression classification plus localization framework for that problem as well. So this idea of some fixed number of regression outputs can be applied to a lot of different problems including pose estimation. So the next task that I want to talk about is object detection and this is a really meaty topic. This is kind of a core problem in computer vision and you could probably teach a whole seminar class on just the history of object detection and various techniques applied there. So I'll be relatively brief and try to go over the main big ideas of object detection plus deep learning that have been used in the last couple of years. But the idea in object detection is that we again start with some fixed set of categories that we care about, maybe cats and dogs and fish or whatever but some fixed set of categories that we're interested in. And now our task is that given our input image, every time one of those categories appears in the image, we want to draw a box around it and we want to predict the category of that box so this is different from classification plus localization because there might be a varying number of outputs for every input image. 
You don't know ahead of time how many objects you expect to find in each image, so this ends up being a pretty challenging problem. So we've seen graphs, so this is kind of interesting. We've seen this graph many times of the ImageNet classification performance as a function of years, and we saw that it just got better and better every year, and there's been a similar trend with object detection, because object detection has again been one of these core problems in computer vision that people have cared about for a very long time. So this slide is due to Ross Girshick, who's worked on this problem a lot, and it shows the progression of object detection performance on this one particular data set called PASCAL VOC, which has been used for a relatively long time in the object detection community. And you can see that up until about 2012 performance on object detection started to stagnate and slow down a little bit, and then 2013 was when some of the first deep learning approaches to object detection came around, and you could see that performance just shot up very quickly, getting better and better year over year. One thing you might notice is that this plot ends in 2015, and it's actually continued to go up since then, so the current state of the art on this data set is well over 80, and in fact a lot of recent papers don't even report results on this data set anymore because it's considered too easy. So it's a little bit hard to know, I'm not actually sure what the state of the art number on this data set is, but it's off the top of this plot. Sorry, did you have a question? Nevermind. Okay, so as I already said, this is different from localization because there might be differing numbers of objects for each image. So for example for this cat on the upper left there's only one object, so we only need to predict four numbers, but now for this image in the middle there are three animals there, so we need our network to predict 12 numbers, four coordinates for each bounding box. Or in this example of many many ducks, you want your network to predict a whole bunch of numbers, again four numbers for each duck. So object detection is quite different from localization, because in object detection you might have varying numbers of objects in the image and you don't know ahead of time how many you expect to find. As a result, it's kind of tricky if you want to think of object detection as a regression problem. So instead, people tend to use kind of a different paradigm when thinking about object detection. One approach that's very common and has been used for a long time in computer vision is this idea of sliding window approaches to object detection. This is kind of similar to the idea of taking small patches and applying that for semantic segmentation, and we can apply a similar idea for object detection. So the idea is that we'll take different crops from the input image; in this case we've got this crop in the lower left hand corner of our image, and now we take that crop, feed it through our convolutional network, and our convolutional network makes a classification decision on that input crop.
It'll say that there's no dog here, there's no cat here, and then in addition to the categories that we care about we'll add an additional category called background and now our network can predict background in case it doesn't see any of the categories that we care about, so then when we take this crop from the lower left hand corner here then our network would hopefully predict background and say that no, there's no object here. Now if we take a different crop then our network would predict dog yes, cat no, background no. We take a different crop we get dog yes, cat no, background no. Or a different crop, dog no, cat yes, background no. Does anyone see a problem here? Yeah, the question is how do you choose the crops? So this is a huge problem right. Because there could be any number of objects in this image, these objects could appear at any location in the image, these objects could appear at any size in the image, these objects could also appear at any aspect ratio in the image, so if you want to do kind of a brute force sliding window approach you'd end up having to test thousands, tens of thousands, many many many many different crops in order to tackle this problem with a brute force sliding window approach. And in the case where every one of those crops is going to be fed through a giant convolutional network, this would be completely computationally intractable. So in practice people don't ever do this sort of brute force sliding window approach for object detection using convolutional networks. Instead there's this cool line of work called region proposals that comes from, this is not using deep learning typically. These are slightly more traditional computer vision techniques but the idea is that a region proposal network kind of uses more traditional signal processing, image processing type things to make some list of proposals for where, so given an input image, a region proposal network will then give you something like a thousand boxes where an object might be present. So you can imagine that maybe we do some local, we look for edges in the image and try to draw boxes that contain closed edges or something like that. These various types of image processing approaches, but these region proposal networks will basically look for blobby regions in our input image and then give us some set of candidate proposal regions where objects might be potentially found. And these are relatively fast-ish to run so one common example of a region proposal method that you might see is something called Selective Search which I think actually gives you 2000 region proposals, not the 1000 that it says on the slide. So you kind of run this thing and then after about two seconds of turning on your CPU it'll spit out 2000 region proposals in the input image where objects are likely to be found so there'll be a lot of noise in those. Most of them will not be true objects but there's a pretty high recall. If there is an object in the image then it does tend to get covered by these region proposals from Selective Search. So now rather than applying our classification network to every possible location and scale in the image instead what we can do is first apply one of these region proposal networks to get some set of proposal regions where objects are likely located and now apply a convolutional network for classification to each of these proposal regions and this will end up being much more computationally tractable than trying to do all possible locations and scales. 
And this idea all came together in this paper called R-CNN from a few years ago that does exactly that. So given our input image, in this case we'll run some region proposal method to get our proposals, these are also sometimes called regions of interest or ROIs, so again Selective Search gives you something like 2000 regions of interest. Now one of the problems here is that these regions in the input image could have different sizes, but if we're going to run them all through a convolutional network, our convolutional networks for classification all want inputs of the same size, typically due to the fully connected layers and whatnot, so we need to take each of these region proposals and warp them to the fixed square size that is expected as input to our downstream network. So we'll crop out the regions corresponding to the region proposals, we'll warp them to that fixed size, and then we'll run each of them through a convolutional network, which will then use, in this case, an SVM to make a classification decision and predict categories for each of those crops. And then I lost a slide. But, not shown on the slide right now, in addition R-CNN also predicts a regression, like a correction to the bounding box, for each of these input region proposals, because the problem is that your input region proposals are kind of generally in the right position for an object but they might not be perfect. So in addition to category labels for each of these proposals, it'll also predict four numbers that are kind of an offset or a correction to the box that was produced at the region proposal stage. So then again, this is a multi-task loss and you would train this whole thing. Sorry, was there a question? The question is how much does the change in aspect ratio impact accuracy? It's a little bit hard to say. I think there are some controlled experiments in some of these papers, but I'm not sure I can give a generic answer to that. Question? The question is, is it necessary for regions of interest to be rectangles? So they typically are, because it's tough to warp non-rectangular regions, but once you move to something like instance segmentation then you sometimes get proposals that are not rectangles, if you actually do care about predicting things that are not rectangles. Is there another question? Yeah, so the question is are the region proposals learned? So in R-CNN it's a traditional thing. These are not learned, this is kind of some fixed algorithm that someone wrote down, but we'll see in a couple minutes that we've actually changed that a little bit in the last couple of years. Is there another question? The question is, is the offset always inside the region of interest? The answer is no, it doesn't have to be. You might imagine that suppose the region of interest put a box around a person but missed the head, then you could imagine the network inferring that, oh, this is a person, but people usually have heads, so the box should be a little bit higher. So sometimes the final predicted boxes will be outside the region of interest. Question? Yeah, the question is what if you have a lot of ROIs that don't correspond to true objects? And like we said, in addition to the classes that you actually care about you add an additional background class, so your class scores can also predict background to say that there was no object here. Question?
Yeah, so the question is what kind of data do we need, and yeah, this is fully supervised in the sense that our training data consists of images where each image has all the object categories marked with bounding boxes for each instance of that category. There are definitely papers that try to approach this like, oh, what if you don't have the data? What if you only have that data for some images? Or what if that data is noisy? But at least in the generic case you assume full supervision of all objects in the images at training time. Okay, so I think we've kind of alluded to this, but there are kind of a lot of problems with this R-CNN framework. And actually if you look at the figure here on the right you can see that additional bounding box head, so I'll put it back. But this is kind of still computationally pretty expensive, because if we've got 2000 region proposals, we're running each of those proposals through the network independently, and that can be pretty expensive. There's also this question of relying on these fixed region proposals; we're not learning them, so that's kind of a problem. And just in practice it ends up being pretty slow, so in the original implementation R-CNN would actually dump all the features to disk, so it'd take hundreds of gigabytes of disk space to store all these features. Then training would be super slow, since you have to make all these different forward and backward passes through the image, and it took something like 84 hours, which is one number they've reported for training time, so this is super super slow. And now at test time it's also super slow, something like 30 seconds to a minute per image, because you need to run thousands of forward passes through the convolutional network for each of these region proposals, so this ends up being pretty slow. Thankfully we have fast R-CNN that fixed a lot of these problems. So when we do fast R-CNN then it's going to look kind of the same. We're going to start with our input image, but now rather than processing each region of interest separately, instead we're going to run the entire image through some convolutional layers all at once to give this high resolution convolutional feature map corresponding to the entire image. And now we still are using some region proposals from some fixed thing like Selective Search, but rather than cropping out the pixels of the image corresponding to the region proposals, instead we imagine projecting those region proposals onto this convolutional feature map and then taking crops from the convolutional feature map corresponding to each proposal, rather than taking crops directly from the image. And this allows us to reuse a lot of this expensive convolutional computation across the entire image when we have many many crops per image. But again, if we have some fully connected layers downstream, those fully connected layers are expecting some fixed-size input, so now we need to do some reshaping of those crops from the convolutional feature map, and they do that in a differentiable way using something they call an ROI pooling layer. Once you have these warped crops from the convolutional feature map, then you can run these things through some fully connected layers and predict your classification scores and your linear regression offsets to the bounding boxes. And now when we train this thing then we again have a multi-task loss that trades off between these two constraints, and during back propagation we can back prop through this entire thing and learn it all jointly.
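As a rough sketch of that feature-cropping step (not the paper's exact implementation): torchvision ships an ROI pooling op that takes the shared feature map plus proposal boxes in image coordinates and returns a fixed-size crop per proposal. The feature map size, the 1/16 scale factor, and the example boxes below are all made up.

```python
import torch
from torchvision.ops import roi_pool

feature_map = torch.randn(1, 512, 32, 32)   # conv features for the whole image
# Proposals from something like Selective Search, as (image index, x1, y1, x2, y2)
# in the coordinates of the original (here, hypothetically 512x512) image.
proposals = torch.tensor([[0.,  40.,  40., 200., 220.],
                          [0., 100.,  60., 380., 300.]])
# spatial_scale maps image coordinates onto the 32x32 feature map (1/16 here).
crops = roi_pool(feature_map, proposals, output_size=(7, 7), spatial_scale=1.0 / 16)
print(crops.shape)   # torch.Size([2, 512, 7, 7]): fixed size, ready for the FC layers
```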
This ROI pooling, it looks kind of like max pooling. I don't really want to get into the details of that right now. And in terms of speed if we look at R-CNN versus fast R-CNN versus this other model called SPP net which is kind of in between the two, then you can see that at training time fast R-CNN is something like 10 times faster to train because we're sharing all this computation between different feature maps. And now at test time fast R-CNN is super fast and in fact fast R-CNN is so fast at test time that its computation time is actually dominated by computing region proposals. So we said that computing these 2000 region proposals using Selective Search takes something like two seconds and now once we've got all these region proposals then because we're processing them all sort of in a shared way by sharing these expensive convolutions across the entire image that we can process all of these region proposals in less than a second altogether. So fast R-CNN ends up being bottlenecked by just the computing of these region proposals. Thankfully we've solved this problem with faster R-CNN. So the idea in faster R-CNN is to just make, so the problem was the computing the region proposals using this fixed function was a bottleneck. So instead we'll just make the network itself predict its own region proposals. And so the way that this sort of works is that again, we take our input image, run the entire input image altogether through some convolutional layers to get some convolutional feature map representing the entire high resolution image and now there's a separate region proposal network which works on top of those convolutional features and predicts its own region proposals inside the network. Now once we have those predicted region proposals then it looks just like fast R-CNN where now we take crops from those region proposals from the convolutional features, pass them up to the rest of the network. And now we talked about multi-task losses and multi-task training networks to do multiple things at once. Well now we're telling the network to do four things all at once so balancing out this four-way multi-task loss is kind of tricky. But because the region proposal network needs to do two things: it needs to say for each potential proposal is it an object or not an object, it needs to actually regress the bounding box coordinates for each of those proposals, and now the final network at the end needs to do these two things again. Make final classification decisions for what are the class scores for each of these proposals, and also have a second round of bounding box regression to again correct any errors that may have come from the region proposal stage. Question? So the question is that sometimes multi-task learning might be seen as regularization and are we getting that affect here? I'm not sure if there's been super controlled studies on that but actually in the original version of the faster R-CNN paper they did a little bit of experimentation like what if we share the region proposal network, what if we don't share? What if we learn separate convolutional networks for the region proposal network versus the classification network? And I think there were minor differences but it wasn't a dramatic difference either way. So in practice it's kind of nicer to only learn one because it's computationally cheaper. Sorry, question? 
Yeah the question is how do you train this region proposal network because you don't know, you don't have ground truth region proposals for the region proposal network. So that's a little bit hairy. I don't want to get too much into those details but the idea is that at any time you have a region proposal which has more than some threshold of overlap with any of the ground truth objects then you say that that is the positive region proposal and you should predict that as the region proposal and any potential proposal which has very low overlap with any ground truth objects should be predicted as a negative. But there's a lot of dark magic hyperparameters in that process and that's a little bit hairy. Question? Yeah, so the question is what is the classification loss on the region proposal network and the answer is that it's making a binary, so I didn't want to get into too much of the details of that architecture 'cause it's a little bit hairy but it's making binary decisions. So it has some set of potential regions that it's considering and it's making a binary decision for each one. Is this an object or not an object? So it's like a binary classification loss. So once you train this thing then faster R-CNN ends up being pretty darn fast. So now because we've eliminated this overhead from computing region proposals outside the network, now faster R-CNN ends up being very very fast compared to these other alternatives. Also, one interesting thing is that because we're learning the region proposals here you might imagine maybe what if there was some mismatch between this fixed region proposal algorithm and my data? So in this case once you're learning your own region proposals then you can overcome that mismatch if your region proposals are somewhat weird or different than other data sets. So this whole family of R-CNN methods, R stands for region, so these are all region-based methods because there's some kind of region proposal and then we're doing some processing, some independent processing for each of those potential regions. So this whole family of methods are called these region-based methods for object detection. But there's another family of methods that you sometimes see for object detection which is sort of all feed forward in a single pass. So one of these is YOLO for You Only Look Once. And another is SSD for Single Shot Detection and these two came out somewhat around the same time. But the idea is that rather than doing independent processing for each of these potential regions instead we want to try to treat this like a regression problem and just make all these predictions all at once with some big convolutional network. So now given our input image you imagine dividing that input image into some coarse grid, in this case it's a seven by seven grid and now within each of those grid cells you imagine some set of base bounding boxes. Here I've drawn three base bounding boxes like a tall one, a wide one, and a square one but in practice you would use more than three. So now for each of these grid cells and for each of these base bounding boxes you want to predict several things. One, you want to predict an offset off the base bounding box to predict what is the true location of the object off this base bounding box. And you also want to predict classification scores so maybe a classification score for each of these base bounding boxes. How likely is it that an object of this category appears in this bounding box. 
So then at the end we end up predicting from our input image, we end up predicting this giant tensor of seven by seven grid by 5B + C. So that's just where we have B base bounding boxes, we have five numbers for each giving our offset and our confidence for that base bounding box and C classification scores for our C categories. So then we kind of see object detection as this input of an image, output of this three dimensional tensor and you can imagine just training this whole thing with a giant convolutional network. And that's kind of what these single shot methods do where they just, and again matching the ground truth objects into these potential base boxes becomes a little bit hairy but that's what these methods do. And by the way, the region proposal network that gets used in faster R-CNN ends up looking quite similar to these where they have some set of base bounding boxes over some gridded image, another region proposal network does some regression plus some classification. So there's kind of some overlapping ideas here. So in faster R-CNN we're kind of treating the object, the region proposal step as kind of this fixed end-to-end regression problem and then we do the separate per region processing but now with these single shot methods we only do that first step and just do all of our object detection with a single forward pass. So object detection has a ton of different variables. There could be different base networks like VGG, ResNet, we've seen different metastrategies for object detection including this faster R-CNN type region based family of methods, this single shot detection family of methods. There's kind of a hybrid that I didn't talk about called R-FCN which is somewhat in between. There's a lot of different hyperparameters like what is the image size, how many region proposals do you use. And there's actually this really cool paper that will appear at CVPR this summer that does a really controlled experimentation around a lot of these different variables and tries to tell you how do these methods all perform under these different variables. So if you're interested I'd encourage you to check it out but kind of one of the key takeaways is that the faster R-CNN style of region based methods tends to give higher accuracies but ends up being much slower than the single shot methods because the single shot methods don't require this per region processing. But I encourage you to check out this paper if you want more details. Also as a bit of aside, I had this fun paper with Andre a couple years ago that kind of combined object detection with image captioning and did this problem called dense captioning so now the idea is that rather than predicting a fixed category label for each region, instead we want to write a caption for each region. And again, we had some data set that had this sort of data where we had a data set of regions together with captions and then we sort of trained this giant end-to-end model that just predicted these captions all jointly. And this ends up looking somewhat like faster R-CNN where you have some region proposal stage then a bounding box, then some per region processing. But rather than a SVM or a softmax loss instead those per region processing has a whole RNN language model that predicts a caption for each region. So that ends up looking quite a bit like faster R-CNN. There's a video here but I think we're running out of time so I'll skip it. 
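Going back to the single shot output for a second, here's a rough sketch of what such a head could look like. The backbone is a stand-in, and the grid size, B, and C values are just illustrative.

```python
import torch
import torch.nn as nn

B, C = 3, 20                       # base boxes per grid cell, number of categories
backbone = nn.Sequential(          # stand-in for a real convolutional backbone
    nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d((7, 7)),
)
head = nn.Conv2d(128, 5 * B + C, kernel_size=1)   # per-cell predictions

x = torch.randn(1, 3, 224, 224)
out = head(backbone(x))            # (1, 5B + C, 7, 7): for every grid cell,
                                   # B boxes x (dx, dy, dw, dh, confidence) plus C class scores
```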
But the idea here is that once you have this, you can kind of tie together a lot of these ideas, and if you have some new problem that you're interested in tackling, like dense captioning, you can recycle a lot of the components that you've learned from other problems like object detection and image captioning and kind of stitch together one end-to-end network that produces the outputs that you care about for your problem. So the last task that I want to talk about is this idea of instance segmentation. So instance segmentation is in some ways like the full problem. We're given an input image and we want to predict the locations and identities of objects in that image, similar to object detection, but rather than just predicting a bounding box for each of those objects, instead we want to predict a whole segmentation mask for each of those objects and predict which pixels in the input image correspond to each object instance. So this is kind of like a hybrid between semantic segmentation and object detection, because like object detection we can handle multiple objects and we differentiate the identities of different instances; so in this example, since there are two dogs in the image, an instance segmentation method actually distinguishes between the two dog instances in the output. And kind of like semantic segmentation we have this pixel-wise accuracy where for each of these objects we want to say which pixels belong to that object. So there have been a lot of different methods that people have used to tackle instance segmentation as well, but the current state of the art is this new paper called Mask R-CNN that actually just came out on arXiv about a month ago, so this is not yet published, this is like super fresh stuff. And this ends up looking a lot like faster R-CNN. So it has this multi-stage processing approach where we take our whole input image, that whole input image goes into some convolutional network and some learned region proposal network that's exactly the same as faster R-CNN, and now once we have our learned region proposals, then we project those proposals onto our convolutional feature map, just like we did in fast and faster R-CNN. But now rather than just making a classification and a bounding box regression decision for each of those boxes, we in addition want to predict a segmentation mask for each of those region proposals. So now it kind of looks like a mini semantic segmentation problem inside each of the region proposals that we're getting from our region proposal network. So now after we do this ROI aligning to warp our features corresponding to the region proposal into the right shape, then we have two different branches. The first branch at the top looks just like faster R-CNN, and it will predict classification scores telling us what is the category corresponding to that region proposal, or alternatively whether or not it's background. And we'll also predict some bounding box coordinates that are regressed off the region proposal coordinates. And now in addition we'll have this branch at the bottom, which looks basically like a mini semantic segmentation network, which will classify for each pixel in that input region proposal whether or not it's an object. So this Mask R-CNN architecture just kind of unifies all of these different problems that we've been talking about today into one nice jointly end-to-end trainable model.
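As a rough sketch of just that extra mask branch (not the paper's exact configuration; the channel counts, crop size, and the example box are assumptions), the per-region mask prediction can be a tiny conv network running on ROI-aligned features:

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_align

num_classes = 80
# Per-region mask head: a small FCN over the ROI-aligned features that predicts
# a low-resolution foreground mask for every class.
mask_head = nn.Sequential(
    nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(256, 256, 2, stride=2), nn.ReLU(),
    nn.Conv2d(256, num_classes, 1),
)

features = torch.randn(1, 256, 50, 50)                # backbone features (made up)
boxes = torch.tensor([[0., 30., 40., 200., 180.]])    # (image index, x1, y1, x2, y2)
roi_feats = roi_align(features, boxes, output_size=(14, 14), spatial_scale=1.0 / 16)
masks = mask_head(roi_feats)                          # (1, num_classes, 28, 28)
```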
And it's really cool and it actually works really really well, so when you look at the examples in the paper they're kind of amazing. They look kind of indistinguishable from ground truth. So in this example on the left you can see that there are these two people standing in front of motorcycles; it's drawn the boxes around these people, and it's also gone in and labeled all the pixels of those people. And it's really small, but actually in the background of that image on the left there's also a whole crowd of people standing very small in the background. It's also drawn boxes around each of those and grabbed the pixels of each of those people. And you can see that this just ends up working really really well, and it's a relatively simple addition on top of the existing faster R-CNN framework. So I told you that Mask R-CNN unifies everything we talked about today, and it also does pose estimation, by the way. So we talked about how you can do pose estimation by predicting these joint coordinates for each of the joints of the person, so you can use Mask R-CNN to do joint object detection, pose estimation, and instance segmentation. And the only addition we need to make is that for each of these region proposals we add an additional little branch that predicts the coordinates of the joints for the instance of the current region proposal. So now this is just another loss, like another layer that we add, another head coming out of the network, and an additional term in our multi-task loss. But once we add this one little branch then you can do all of these different problems jointly, and you get results looking something like this. Where now this network, like a single feed forward network, is deciding how many people are in the image, detecting where those people are, figuring out the pixels corresponding to each of those people, and also drawing a skeleton estimating the pose of those people. And this works really well even in crowded scenes like this classroom where there's a ton of people sitting and they all overlap each other, and it just seems to work incredibly well. And because it's built on the faster R-CNN framework, it also runs relatively close to real time, so this is running something like five frames per second on a GPU, because this is all sort of done in a single forward pass of the network. So this is again a super new paper, but I think that this will probably get a lot of attention in the coming months. So just to recap, we've talked. Sorry, question? The question is how much training data do you need? So all of these instance segmentation results were trained on the Microsoft COCO data set. So Microsoft COCO is roughly 200,000 training images; it has 80 categories that it cares about, so in each of those 200,000 training images it has all the instances of those 80 categories labeled. So there's something like 200,000 images for training and there's something like, I think, an average of five or six instances per image. So it actually is quite a lot of data. And for Microsoft COCO, for all the people in Microsoft COCO they also have all the joints annotated as well, so this actually does have quite a lot of supervision at training time, you're right, and it actually is trained with quite a lot of data.
So I think one really interesting topic to study moving forward is that we kind of know that if you have a lot of data to solve some problem, at this point we're relatively confident that you can stitch up some convolutional network that can probably do a reasonable job at that problem, but figuring out ways to get performance like this with less training data is a super interesting and active area of research, and I think that's something people will be spending a lot of their efforts working on in the next few years. So just to recap, today we had kind of a whirlwind tour of a whole bunch of different computer vision topics, and we saw how a lot of the machinery that we built up from image classification can be applied relatively easily to tackle these different computer vision topics. And next time we'll have a really fun lecture on visualizing CNN features. We'll also talk about DeepDream and neural style transfer.
Lecture 12: Visualizing and Understanding
- Good morning. So, it's 12:03, so I want to get started. Welcome to Lecture 12 of CS231n. Today we are going to talk about visualizing and understanding convolutional networks. This is always a super fun lecture to give because we get to look at a lot of pretty pictures, so it's one of my favorites. As usual, a couple administrative things. So, hopefully your projects are all going well, because as a reminder your milestones are due on Canvas tonight. It is Canvas, right? Okay, just wanted to double check, yeah. Due on Canvas tonight. We are working on furiously grading your midterms, so we hope to have those midterm grades back to you on Gradescope this week. I know there was a little confusion, you all got registration emails for Gradescope probably in the last week, something like that, and we saw a couple of questions on Piazza. We've decided to use Gradescope to grade the midterms, so don't be confused if you get some emails about that. Another reminder is that assignment three was released last week on Friday. It will be due a week from this Friday, on the 26th. Assignment three is almost entirely brand new this year, so we apologize for taking a little bit longer than expected to get it out. But I think it's super cool. A lot of the stuff we'll talk about in today's lecture, you'll actually be implementing on your assignment. And for the assignment, you'll get the choice of either PyTorch or TensorFlow to work through these different examples, so we hope that's a really useful experience for you guys. We also saw a lot of activity on HyperQuest over the weekend, so that's really awesome. The leaderboard went up yesterday. It seems like you guys are really trying to battle it out to show off your deep learning neural network training skills, so that's super cool. And because of the high interest in HyperQuest, and due to the conflict with the milestone submission time, we decided to extend the deadline for extra credit through Sunday. So anyone who does at least 12 runs on HyperQuest by Sunday will get a little bit of extra credit in the class. Also those of you who are at the top of the leaderboard doing really well will maybe get a little bit of extra extra credit. So thanks for participating; we got a lot of interest and that was really cool. Final reminder is about the poster session. The poster session will be on June 6th. That date is finalized; I don't remember the exact time, but it is June 6th. We had some questions about when exactly that poster session is, for those of you who are traveling at the end of the quarter or starting internships or something like that, so it will be June 6th. Any questions on the admin notes? No, totally clear. So, last time we had a pretty jam-packed lecture where we talked about a lot of different computer vision tasks. As a reminder, we talked about semantic segmentation, which is this problem where you want to assign labels to every pixel in the input image, but which does not differentiate the object instances in those images. We talked about classification plus localization, where in addition to a class label you also want to draw a box, or perhaps several boxes, in the image. The distinction here is that in a classification plus localization setup you have some fixed number of objects that you are looking for. We also saw that this type of paradigm can be applied to things like pose estimation.
Here you want to regress to the positions of the different joints in the human body. We also talked about object detection, where you start with some fixed set of category labels that you are interested in, like dogs and cats, and then the task is to draw boxes around every instance of those objects that appear in the input image. And object detection is really distinct from classification plus localization, because with object detection we don't know ahead of time how many object instances we're looking for in the image. And we saw that there's this whole family of methods based on R-CNN, Fast R-CNN and Faster R-CNN, as well as the single shot detection methods, for addressing this problem of object detection. Then finally we talked pretty briefly about instance segmentation, which is kind of combining aspects of semantic segmentation and object detection, where the goal is to detect all the instances of the categories we care about, as well as label the pixels belonging to each instance. So in this case, we detected two dogs and one cat, and for each of those instances we wanted to label all the pixels. So we kind of covered a lot last lecture, but those are really interesting and exciting problems that you guys might consider using in parts of your projects. But today we are going to shift gears a little bit and ask another question, which is, what's really going on inside convolutional networks? We've seen by this point in the class how to train convolutional networks, and how to stitch up different types of architectures to attack different problems. But one question that you might have had in your mind is, what exactly is going on inside these networks? How do they do the things that they do? What kinds of features are they looking for? And all these sorts of related questions. So far we've sort of seen ConvNets as a little bit of a black box, where some input image of raw pixels comes in on one side, goes through many layers of convolution and pooling and different sorts of transformations, and on the other side we end up with some understandable, interpretable output, such as class scores or bounding box positions or labeled pixels or something like that. But the question is, what are all these other layers in the middle doing? What kinds of things in the input image are they looking for? Can we try to gain intuition for how ConvNets are working? What types of things in the image are they looking for? And what kinds of techniques do we have for analyzing these internals of the network? So, one relatively simple thing is the first layer. We've talked about this before, but recall that the first convolutional layer consists of filters. So, for example in AlexNet, the first convolutional layer consists of a number of convolutional filters, and each convolutional filter has shape 3 by 11 by 11. And these convolutional filters get slid over the input image; we take inner products between some chunk of the image and the weights of the convolutional filter, and that gives us our output after that first convolutional layer. So in AlexNet we have 64 of these filters. But now in the first layer, because we are taking a direct inner product between the weights of the convolutional layer and the pixels of the image, we can get some sense for what these filters are looking for by simply visualizing the learned weights of these filters as images themselves.
So for each of those 11 by 11 by 3 filters in AlexNet, we can visualize that filter as a little 11 by 11 image whose three channels give the red, green and blue values, and because there are 64 of them we get 64 little 11 by 11 images. The filters shown here are taken from pretrained models in the PyTorch model zoo, and we're looking at the weights of the first convolutional layer of AlexNet, ResNet-18, ResNet-101 and DenseNet-121. You can see what these filters are looking for: a lot of them detect oriented edges, bars of light and dark at various angles and positions in the input, and we also see opposing colors, like green and pink, or orange and blue. This connects back to what we talked about with Hubel and Wiesel all the way back in the first lecture: the human visual system is known to detect things like oriented edges at its very early stages, and it turns out that convolutional networks tend to do something similar at their first convolutional layer. What's interesting is that pretty much no matter what architecture you use or what data you train on, the first convolutional layer of almost any network trained on images ends up looking like this, with oriented edges and opposing colors. Sorry, what was that question? Yes, these are the learned weights of the first convolutional layer. The next question is: why does visualizing the weights of a filter tell you what the filter is looking for? The intuition comes from template matching and inner products. If you have some template vector, and you compute a scalar output by taking the inner product between that template and some arbitrary piece of data, then the input which maximizes that output, under a norm constraint on the input, is exactly a copy of the template. Whenever you're taking inner products, the thing that maximizes the inner product is a copy of the thing you're taking the inner product with. That's why visualizing these weights shows us what the first layer is looking for. And yes, for these networks the first layer is always convolutional: whenever you're working with image data, you generally put a convolutional layer first. The next question is: can we do this same type of procedure for layers in the middle of the network? That's actually the next slide, so good anticipation. If we draw the exact same visualization for the intermediate convolutional layers, it's actually a lot less interpretable. This example uses the tiny ConvNet demo network that runs on the course website; for that network, the first layer is a 7 by 7 convolution with 16 filters.
At the top we're visualizing the first-layer weights of this network, just like on the previous slide. But now look at the second-layer weights. After the first convolution there's a ReLU or some other non-linearity, and then the second convolutional layer receives a 16-channel input and does a 7 by 7 convolution with 20 filters. The problem is that you can't really visualize these filters directly as images: each second-layer filter is 7 by 7 and extends over the full input depth, so it has 16 planes, and there are 20 such filters producing the output channels of the next layer. Looking directly at the weights of these filters doesn't tell us much. What's been done here is that, for each single 16 by 7 by 7 filter, we spread out its sixteen 7 by 7 planes into sixteen 7 by 7 grayscale images. The little grayscale images in each group show the weights of one second-layer filter, and because there are 20 outputs from this layer, there are 20 such groups. If we visualize the weights this way, you can see there is some spatial structure, but it doesn't give you good intuition for what the filters are looking at, because they are not connected directly to the input image. Recall that the second-layer filters are connected to the output of the first layer, so this visualization shows what activation pattern after the first convolution would cause a second-layer filter to maximally activate. That's not very interpretable, because we don't have a good sense for what those first-layer activations look like in terms of image pixels. So we'll need slightly fancier techniques to understand what's going on in the intermediate layers. Question in the back? Yes, the question is about scaling: for all the visualizations on the previous slide we had to scale the weights into the 0 to 255 range. In practice those weights can be unbounded, they can have any range, so to get nice visualizations we need to rescale them. These visualizations also don't take the biases of the layers into account, so keep that in mind and don't take these pictures too literally. Now, at the last layer, remember that a classification network ends with maybe 1000 class scores telling us the predicted score for each class in the training set, and immediately before that last layer there's usually a fully connected layer. In the case of AlexNet we have a 4096-dimensional feature representation of the image that gets fed into the final layer to predict the class scores. Another route for visualizing and understanding ConvNets is to try to understand what's happening at that last layer of the network.
So what we can do is take some dataset of images, run them through our trained convolutional network, and record the 4096-dimensional vector for each image, and then try to understand and visualize that last hidden layer rather than the first convolutional layer. One thing you might imagine is a nearest-neighbor approach. Remember, way back in the second lecture we saw the graphic on the left, where we had a nearest-neighbor classifier and were looking at nearest neighbors in pixel space between CIFAR-10 images. When you look at nearest neighbors in pixel space you pull up images that look quite similar to the query: the left column is a CIFAR-10 test image and the next five columns show its nearest neighbors in pixel space. For example, for the white dog here, the nearest neighbors are white blobby things that may or may not be dogs, but at least the raw pixels are quite similar. Now we can do the same type of visualization, but rather than computing nearest neighbors in pixel space we compute them in the 4096-dimensional feature space produced by the network. On the right, the first column shows examples of images from the ImageNet test set, and the subsequent columns show their nearest neighbors in the 4096-dimensional feature space computed by AlexNet. This is quite different from the pixel-space nearest neighbors: the pixels are often quite different between an image and its feature-space neighbors, but the semantic content of the images tends to be similar. For example, in the second row the query image is an elephant standing on the left side of the image with green grass behind it, and its third nearest neighbor in the test set is an elephant standing on the right side of the image. Between those two images the pixels are almost entirely different, yet in the feature space learned by the network they end up very close to each other. That means this last layer of features is somehow capturing the semantic content of the images, which is really cool and exciting, and in general these nearest-neighbor visualizations are a really quick and easy way to get a sense of what's going on. Yes? The question is that the standard supervised training procedure for a classification network has nothing in the loss encouraging these features to be close together. That's true; it's kind of a happy accident that they end up close to each other, because we never told the network during training that these features should be close.
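A minimal sketch of how you might extract these last-hidden-layer features and find nearest neighbors with PyTorch, assuming a recent torchvision AlexNet whose `classifier` is a Sequential ending in the final 1000-way linear layer; the `images` batch below is only a placeholder for real preprocessed images:

```python
import torch
import torchvision

model = torchvision.models.alexnet(pretrained=True).eval()

def fc_features(x):
    """Return the 4096-d activations of the last hidden layer for a batch x."""
    with torch.no_grad():
        f = model.features(x)
        f = model.avgpool(f).flatten(1)
        # run every classifier layer except the final 1000-way linear layer
        for layer in model.classifier[:-1]:
            f = layer(f)
    return f

# images: a (N, 3, 224, 224) batch of preprocessed images (placeholder here)
images = torch.randn(8, 3, 224, 224)
feats = fc_features(images)                  # (N, 4096)

# nearest neighbors of image 0 in feature space (L2 distance)
dists = torch.cdist(feats[0:1], feats)[0]    # (N,)
print(dists.argsort()[:5])                   # indices of the 5 closest images
```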
However, people do sometimes train networks with things like a contrastive loss or a triplet loss, which explicitly put constraints on the network so that those last-layer features end up with a metric-space interpretation. But AlexNet, at least, was not trained specifically for that. Question: what does this nearest-neighbor thing have to do with the last layer? We take an image, run it through the network, and the last hidden layer gives a 4096-dimensional vector, because there are fully connected layers at the end of the network. We write down that 4096-dimensional vector for each image and then compute nearest neighbors according to those vectors. Maybe we can chat offline if that's still unclear. Another angle on visualizing what's going on in this last layer is dimensionality reduction. Those of you who have taken CS229 have seen something like PCA, which lets you take a high-dimensional representation, like these 4096-dimensional features, and compress it down to two dimensions so you can visualize the feature space more directly. Principal Component Analysis is one way to do that, but there's another really powerful algorithm called t-SNE, t-distributed stochastic neighbor embedding, a non-linear dimensionality reduction method that people in deep learning often use for visualizing features. As an example of what t-SNE can do, the visualization here shows a t-SNE dimensionality reduction of the MNIST dataset. MNIST, remember, is a dataset of handwritten digits between zero and nine, each a 28 by 28 grayscale image. We use t-SNE to take that 28 times 28 dimensional space of raw pixels, compress it down to two dimensions, and plot each MNIST digit at its compressed two-dimensional coordinate. When you run t-SNE on the raw pixels of MNIST you see natural clusters appear, corresponding to the digit classes. Now we can do a similar visualization where we apply t-SNE to the features from the last layer of our trained ImageNet classifier. To be a bit more concrete, we take a large set of images, run them through the convolutional network, and record the final 4096-dimensional feature vector for each image, which gives us a large collection of 4096-dimensional vectors. We then apply t-SNE to compress that 4096-dimensional feature space down to two dimensions, lay out a grid in the compressed two-dimensional space, and visualize which image lands at each location in the grid. Doing this gives a rough sense of what the geometry of the learned feature space looks like.
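A minimal sketch of that reduction using scikit-learn's t-SNE, assuming you've already collected the last-layer features into a NumPy array (the random array below is only a placeholder):

```python
import numpy as np
from sklearn.manifold import TSNE

# feats: (N, 4096) array of last-layer features, one row per image (placeholder here)
feats = np.random.randn(500, 4096).astype(np.float32)

# compress 4096 dimensions down to 2 with t-SNE
coords = TSNE(n_components=2, perplexity=30).fit_transform(feats)  # (N, 2)

# coords[i] is where image i lands in the 2-D embedding; you can then paste
# each image's pixels at its coordinate to build the grid figure from the slide.
print(coords.shape)
```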
These grid images are a little hard to see at this resolution, so I'd encourage you to check out the high-resolution versions online. But at least on the left you can maybe see that there's one cluster at the bottom of green things, different kinds of flowers, and other clusters for different dog breeds, other animals, and various scenes. So there's a kind of continuous semantic structure in this feature space, which we can explore through this t-SNE reduction of the features. Is there a question? Yeah. The basic idea is that we end up with three pieces of information about each image: the pixels of the image, the 4096-dimensional feature vector, and the two-dimensional coordinate that t-SNE assigns to that feature vector. We then take the original pixels of the image and place them at the two-dimensional coordinate corresponding to the dimensionality-reduced version of its feature. It's a little bit involved. Question in the front: roughly how much variance do these two dimensions explain? I'm not sure of the exact number, and it gets a bit muddy with t-SNE because it's a non-linear dimensionality reduction technique, so I'd have to check offline. Question: can you do the same analysis at upper layers of the network? Yes, you can, but I don't have those visualizations here, sorry. Question: shouldn't images overlap once we do this dimensionality reduction? Yes, of course they do. This visualization just takes a regular grid and, for each grid point, picks an image whose coordinate is close to that point, so it does not show you the density in different parts of the feature space. There are a couple more visualizations of this nature at the link that address that a bit. Okay. Another thing you can do for intermediate features: we said a couple of slides ago that visualizing the weights of intermediate layers isn't very interpretable, but visualizing the activation maps of those intermediate layers actually is somewhat interpretable in some cases. Again taking AlexNet as the example, the conv5 feature map for an image is a 128 by 13 by 13 tensor, which we can think of as 128 different 13 by 13 two-dimensional grids. We can visualize each of those 13 by 13 slices of the feature map as a grayscale image, and this gives us some sense for what kinds of things in the input each feature in that layer is looking for. There's a really cool interactive tool for this by Jason Yosinski that you can download; there's a video of it on his website. It runs a convolutional network on the input stream of a webcam and visualizes, in real time, each slice of an intermediate feature map. In the example, the input image is a picture of a person in front of the camera, and most of the intermediate features are kind of noisy, with not much going on.
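One simple way to grab an intermediate activation volume like this is a forward hook; here is a sketch, assuming torchvision's AlexNet, where `features[10]` happens to be the last conv layer (the input below is a placeholder for a real preprocessed image):

```python
import torch
import torchvision
import matplotlib.pyplot as plt

model = torchvision.models.alexnet(pretrained=True).eval()

acts = {}
def save_activation(module, inp, out):
    acts['conv5'] = out.detach()

# features[10] is the last conv layer in torchvision's AlexNet definition
model.features[10].register_forward_hook(save_activation)

img = torch.randn(1, 3, 224, 224)       # placeholder for a preprocessed image
model(img)

a = acts['conv5'][0]                     # (C, H, W) activation volume
plt.imshow(a[17].numpy(), cmap='gray')   # one channel shown as a grayscale map
plt.show()
```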
In that demo, though, there's one highlighted intermediate feature, also shown larger here, that seems to activate on the portions of the feature map corresponding to the person's face. That's really interesting, and it suggests that maybe this particular slice of the feature map, at this layer of this particular network, is looking for human faces or something like that, which is a nice finding. Question: are the black activations dead ReLUs? You have to be a little careful with terminology. We usually say a ReLU is dead when it's inactive over the entire training dataset; here I would just say it's a ReLU that's not active for this particular input. Question: if there are no humans in ImageNet, how can it recognize a human face? There definitely are humans in ImageNet images; I don't think a person is one of the thousand categories in the classification challenge, but people appear in a lot of the images, and that can be a useful signal for detecting other kinds of things. That's actually a nice result, because it shows the network can learn features that are useful for the classification task at hand even when they're a bit different from the explicit categories we told it to predict. Okay, question? At each layer of the convolutional network, the input image is 3 by 224 by 224 and goes through many stages of convolution; after each convolutional layer we get a three-dimensional chunk of numbers, which are the outputs of that layer. We call that whole three-dimensional chunk an activation volume, and one slice of it is an activation map. The question is: if the image is K by K, will the activation map be K by K? Not always, because there can be subsampling due to strided convolution and pooling, but in general the size of each activation map is linear in the size of the input image. Another useful thing we can do for visualizing intermediate features is to look at which patches from input images cause maximal activation in different neurons. What we do is pick, say, the conv5 layer of AlexNet again; each activation volume at conv5 is a 128 by 13 by 13 chunk of numbers. We pick one of those 128 channels, maybe channel 17, and run many images through the network; for each image we record its conv5 features and look at the parts of that 17th feature map that are maximally activated over the dataset. Because this is a convolutional layer, each neuron in the layer has some limited receptive field in the input: each neuron is not looking at the whole image, only at a subset of it.
Then what we do is visualize the patches from this large dataset of images corresponding to the maximal activations of that particular feature at that particular layer, and sort those patches by how strongly they activated the neuron. Here are some examples from a particular network, though the specific network doesn't matter much. Each row corresponds to one neuron from one layer, and the patches in that row are the patches from a large dataset of images that maximally activated that neuron. They can give you a sense for what type of feature the neuron might be looking for. For example, in the top row we see a lot of circly kinds of things: some eyes, mostly eyes, but also blue circular regions, so maybe this particular neuron is looking for blue circly things in the input. In the middle we have neurons that seem to be looking for text in different colors, or for curving edges at different colors and orientations. I've been a little loose with terminology here: a neuron is one scalar value in that conv5 activation map, but because the layer is convolutional, all the neurons in one channel share the same weights. So we've chosen one channel, and the patches could have been drawn from anywhere in any image, due to the convolutional nature of the layer. At the bottom we also see maximally activating patches for neurons from a layer higher up in the same network. Because they come from higher in the network they have a larger receptive field, so they're looking at larger patches of the input image, and they tend to look for larger structures: the second row from the bottom seems to be looking for humans, or maybe human faces, and others seem to look for parts of cameras or other larger object-like things. Another cool experiment, which comes from Zeiler and Fergus's ECCV 2014 paper, is an occlusion experiment. What we want to figure out is which parts of the input image cause the network to make its classification decision. We take our input image, in this case an elephant, block out some region of it, and replace that region with the mean pixel value from the dataset. We then run the occluded image through the network and record the predicted probability for the class. We slide the occluding patch over every position in the input image, repeat the process, and draw a heat map showing the predicted probability as a function of which part of the input we occluded. The idea is that if blocking out some part of the image causes the network score to change drastically, then that part of the input was probably important for the classification decision.
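Here is a hedged sketch of that occlusion loop; the patch size, stride, and fill value are arbitrary choices, and `model` and `img` are assumed to be a trained classifier and a preprocessed (1, 3, H, W) image:

```python
import torch
import torch.nn.functional as F

def occlusion_heatmap(model, img, target_class, patch=32, stride=16, fill=0.0):
    """Slide a patch over the image and record the target-class probability."""
    model.eval()
    _, _, H, W = img.shape
    heatmap = []
    with torch.no_grad():
        for y in range(0, H - patch + 1, stride):
            row = []
            for x in range(0, W - patch + 1, stride):
                occluded = img.clone()
                # replace the patch with a constant (roughly the dataset mean
                # for inputs normalized to zero mean)
                occluded[:, :, y:y+patch, x:x+patch] = fill
                prob = F.softmax(model(occluded), dim=1)[0, target_class]
                row.append(prob.item())
            heatmap.append(row)
    return torch.tensor(heatmap)   # low values mark regions that matter
```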
Here I've shown three different examples of this occlusion experiment. Take the go-kart at the bottom: red corresponds to a low probability and white and yellow to a high probability. When we block out the region of the image corresponding to the go-kart in front, the predicted probability for the go-kart class drops a lot, which tells us the network really is relying on those pixels to make its classification decision. Question? Yes, the question is what's going on in the background. The image is maybe a little too small to tell, but this is actually a go-kart track and there are a couple of other go-karts in the background. I think that when you block out those other go-karts, that also influences the score, or maybe the horizon is a useful feature for detecting go-karts; it's sometimes a little hard to tell, but this is a pretty cool visualization. Was there another question? For this example we're taking one image and masking out all parts of that one image. The second question was: how is this useful? You don't really take this information and feed it directly back into the training process; instead, it's a tool for humans to understand what kinds of computations these trained networks are doing. It's more for your understanding than for improving performance per se. Another related idea is the concept of a saliency map, which is something you'll see in your homework. Again we have the same question: given an input image, in this case of a dog, and the predicted class label of dog, we want to know which pixels in the input image were important for that classification. Masking is one way to get at this question, but saliency maps are another angle of attack. One relatively simple idea, from Karen Simonyan's paper a couple of years ago, is to compute the gradient of the predicted class score with respect to the pixels of the input image. In a first-order sense, this directly tells us, for each pixel of the input image, how much the classification score would change if we wiggled that pixel a little bit. It's another way of asking which pixels in the input matter for the classification. When we compute a saliency map for this dog, we see a nice outline of the dog in the image, which suggests those are the pixels the network is actually looking at. When we repeat the process for different images, we get some sense that the network is looking at the right regions, which is somewhat comforting. Question: do people use saliency maps for semantic segmentation? The answer is yes; that was actually another component of Karen's paper. You guys are really on top of it this lecture.
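A minimal sketch of that gradient-based saliency computation; following the usual recipe, it takes the maximum absolute gradient over the color channels (assumes `img` is a preprocessed (1, 3, H, W) tensor):

```python
import torch

def saliency_map(model, img, target_class):
    """Gradient of the (unnormalized) class score with respect to input pixels."""
    model.eval()
    img = img.detach().clone().requires_grad_(True)
    score = model(img)[0, target_class]
    score.backward()
    # take the max absolute gradient over the 3 color channels
    return img.grad.abs().max(dim=1)[0].squeeze(0)   # (H, W) saliency map
```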
The idea there is that maybe you can use these saliency maps to perform semantic segmentation without any labeled segmentation data. They use the GrabCut segmentation algorithm, which I don't want to get into the details of; it's an interactive segmentation algorithm. When you combine a saliency map with GrabCut, you can sometimes segment out the object in the image, which is really cool. However, this is a little brittle, and in general it will probably work much, much worse than a network that had access to segmentation supervision at training time, so I'm not sure how practical it is, but it's pretty cool that it works at all. Another related idea is guided backpropagation. Again we work with one particular image, but now, instead of looking at the class score, we pick some intermediate neuron in the network and ask which parts of the input image influence the value of that internal neuron. You could imagine computing a saliency map for this too: rather than the gradient of the class score with respect to the image pixels, you compute the gradient of some intermediate value in the network with respect to the pixels, which tells you which pixels of the input influence that particular neuron. That would be using normal backpropagation. But it turns out there's a slight tweak to the backpropagation procedure that gives somewhat cleaner images. That's the idea of guided backpropagation, which builds on Zeiler and Fergus's deconvnet visualizations. I don't want to get into the details too much, but it's a somewhat strange tweak where you change the way you backpropagate through ReLU non-linearities: you only backpropagate positive gradients through the ReLUs and do not pass negative gradients through. So you're no longer computing the true gradient; instead you only keep track of positive influences throughout the entire network. You should read the referenced papers if you want more detail on why that's a good idea, but empirically, guided backpropagation, as opposed to regular backpropagation, tends to give much cleaner, nicer images showing which pixels of the input influence a particular neuron. Here we're seeing the same visualization of maximally activating patches from a few slides ago, but now, in addition to the patches, we've also run guided backpropagation to show exactly which parts of those patches influence the neuron's score. Remember, for the example at the top we guessed this neuron might be looking for circly things, because there are a lot of circly patches.
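As a sketch of what that tweak looks like, here is a guided-ReLU autograd function you could swap in for the network's ReLUs; this is an illustration of the rule described above under my own naming, not the original authors' code:

```python
import torch

class GuidedReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_out):
        x, = ctx.saved_tensors
        # standard ReLU backward: zero where the forward input was negative;
        # "guided" part: additionally zero out negative incoming gradients
        return grad_out * (x > 0).float() * (grad_out > 0).float()

# Using GuidedReLU.apply in place of the network's ReLUs, the image gradient
# of an internal neuron tends to give much cleaner visualizations.
```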
When we look at the guided backpropagation visualizations, that intuition is somewhat confirmed: it really is the circly parts of the input patch that are influencing the neuron's value. So this is a useful tool for understanding what these different intermediate neurons are looking for. One limitation of guided backpropagation and saliency maps, though, is that they are always functions of a fixed input image: they tell us, for that fixed image, which pixels influence the value of the neuron. Another question removes this reliance on an input image: what type of input, in general, would cause this neuron to activate? We can answer that with gradient ascent. Remember, we normally use gradient descent to train our convolutional networks by minimizing a loss over the weights. Now, instead, we fix the weights of our trained network and synthesize an image by performing gradient ascent on the pixels of the image, to try to maximize the value of some intermediate neuron or some class score. In this gradient ascent procedure we're no longer optimizing over the weights of the network, those stay fixed; instead we're changing the pixels of a generated image so that the chosen neuron value or class score is maximized. In addition, we need some regularization term. Before, we used regularization to prevent the network weights from overfitting to the training data; now we need something similar to prevent the pixels of our generated image from overfitting to the peculiarities of the particular network. We want the generated image to have two properties: it should maximally activate some score or neuron value, but it should also look like a natural image, with the kinds of statistics we typically see in natural images. The regularization term in the objective is there to make the generated image look relatively natural, and we'll see a couple of different regularizers as we go. The general strategy is pretty simple, and you'll implement a lot of things of this nature on assignment 3. We start from some initial image, either all zeros or uniform random noise. Then we repeat: forward the image through the network and compute the score or neuron value we're interested in; backpropagate to compute the gradient of that score with respect to the pixels of the image; and make a small gradient ascent update to the pixels themselves, to increase that score. Repeat this over and over until you have a beautiful image. As for the image regularizer, a very simple idea is to penalize the L2 norm of the generated image. This isn't especially meaningful semantically, but it's one of the earliest regularizers that appeared in the literature on this kind of image generation.
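A minimal sketch of this gradient ascent loop with the simple L2 regularizer; the step size, number of steps, and regularization strength below are arbitrary placeholder values:

```python
import torch

def class_visualization(model, target_class, steps=200, lr=1.0, l2_reg=1e-3):
    """Gradient ascent on the pixels to maximize a class score minus an L2 penalty."""
    model.eval()
    img = torch.zeros(1, 3, 224, 224, requires_grad=True)   # start from zeros (or noise)
    for _ in range(steps):
        score = model(img)[0, target_class]
        objective = score - l2_reg * img.pow(2).sum()        # score plus regularizer
        model.zero_grad()
        if img.grad is not None:
            img.grad.zero_()
        objective.backward()
        with torch.no_grad():
            img += lr * img.grad                             # ascent step on the pixels
    return img.detach()
```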
When you run this on a trained network, you can see that we're trying to generate images that maximize, for example, the dumbbell score in the upper left. In the synthesized image, it's a little hard to see, but there are a lot of dumbbell-like shapes superimposed at different positions. If we generate an image for cups, we see a bunch of different cups superimposed on one another. The Dalmatian one is pretty cool because you can see the black-and-white spotted pattern that's characteristic of Dalmatians, and for lemons you see yellow splotches in the image. There are a couple more examples here; the goose is kind of cool, and the kit foxes actually look a bit like kit foxes. Question: why are these all rainbow colored? In general, getting true colors out of these visualizations is tricky, because an actual image is bounded in the range 0 to 255, so this really should be a constrained optimization problem; if we use generic unconstrained gradient ascent, we have to use something like projected gradient ascent or rescale the image at the end. So you shouldn't take the colors in these visualizations too seriously. Question: what happens if you let the thing loose and don't put any regularizer on it? Then you tend to get an image that maximizes the score and is confidently classified as the class you wanted, but it usually doesn't look like anything; it looks like random noise. That's an interesting property in itself that we'll go into in much more detail in a future lecture, but it's why, if we want to understand how the network makes its decisions, it's useful to add a regularizer that pushes the generated image to look more natural. Question in the back: we see a lot of multimodality here; are there ways to combat that? Yes, we'll see that; this is just the first step in a whole line of work on improving these visualizations. One angle is to improve the regularizer in order to improve the visualized images. There's another paper from Jason Yosinski and his collaborators where they added some additional, more sophisticated regularizers. In addition to the L2 norm penalty, they periodically, during the optimization, apply Gaussian blurring to the image, clip pixels with small values to zero, and clip pixels with small gradients to zero. You can think of this as a projected gradient ascent algorithm, where we periodically project the generated image onto a nicer set of images with nicer properties, for example spatial smoothness thanks to the Gaussian blurring. When you do this you get much nicer, clearer images: the flamingos look like flamingos, the ground beetle starts to look more beetle-like, and the black swan maybe looks like a black swan.
The billiard tables actually look kind of impressive now; you can definitely see the billiard-table structure. So once you add nicer regularizers, the generated images become a bit cleaner. We can perform this procedure not only for the final class scores but also for intermediate neurons: instead of maximizing the billiard-table score, for example, we can maximize one of the neurons from some intermediate layer. Question: what's with the four images per neuron? Remember we initialize the image randomly, so those four images come from four different random initializations. Again, we can use the same procedure to synthesize images that maximally activate intermediate neurons, and you can get a sense for what some of those neurons are looking for: maybe at layer 4 there's a neuron looking for spirally things, or one looking for chunks of caterpillars; it's a little hard to tell. In general, as you go higher in the network, the receptive fields of the neurons get larger, so they're looking at larger patches of the image and tend to look for larger structures or more complex patterns. That's pretty cool. People have really gone to town on this and kept improving these visualizations by adding extra features. There was a cool paper that explicitly tried to address the multimodality someone asked about a few minutes ago. They explicitly take the multimodality into account in the optimization: for each class, they run a clustering algorithm to try to separate the class into different modes, and then initialize the optimization with something close to one of those modes. For intuition, the eight images on the right are all of grocery stores, but the top row shows close-up pictures of produce on shelves, which are labeled as grocery stores, and the bottom row shows people walking around grocery stores or standing at the checkout line, which are also labeled as grocery stores even though their visual appearance is quite different. A lot of classes end up being multimodal like this, and if you explicitly take that multimodality into account when generating images, you get nicer results. When you look at some of their synthesized images for classes, like the bell pepper, the cardoon, the strawberries, or the jack-o'-lantern, they end up with some very beautifully generated images. I don't want to get too far into the details of the next slide, but you can go even further, add an even stronger image prior, and generate some very beautiful images indeed. These are all synthesized images that try to maximize the score of some class, but the general idea is that rather than optimizing the pixels of the input image directly, they optimize the fc6 representation of the image instead.
To do that they need a feature inversion network, and I don't want to get into the details here; you should read the paper, it's really cool. The point is that when you add stronger priors toward modeling natural images, you can end up generating quite realistic images that give you some sense of what the network is looking for. So that's one cool thing we can do with this strategy, but this idea of synthesizing images by following gradients on image pixels is actually super powerful, and another really interesting thing we can do with it is the concept of a fooling image. We pick some arbitrary image, say a picture of an elephant, and then tell the network that we want to change the image to maximize the score of koala bear instead. So we try to change that image of an elephant so that the network classifies it as a koala bear. What you might hope is that the elephant would morph into a koala bear, maybe sprouting cute little ears or something like that. But that's not what happens in practice, which is pretty surprising. If you take this picture of an elephant and change it to be classified as a koala bear, what you find is that the second image on the right actually is classified as a koala bear, but it looks exactly the same to us. That's pretty fishy and pretty surprising. At the bottom we've taken a picture of a boat, schooner is the ImageNet class, and told the network to classify it as an iPod. The second example still looks like a boat to us, but the network thinks it's an iPod, and the pixel differences between the two images are basically nothing. If you magnify those differences, you don't see any iPod-like or koala-like features; they just look like random patterns of noise. So the question is what's going on here, and how can this possibly be the case? We'll have a guest lecture from Ian Goodfellow in a week and a half or two, and he'll go into much more detail about this phenomenon, but I wanted to mention it here because it's on your homework. Question? Yes, the question is whether we can use fooling images as training data; Ian will go into much more detail on those kinds of strategies, because that's really a whole lecture unto itself. Question: why do we care about any of this stuff? Okay, maybe that was a mischaracterization, sorry. The question is: how does understanding these intermediate neurons help our understanding of the final classification? This whole field of visualizing intermediates is largely a response to a common criticism of deep learning: you've got this big black-box network, you trained it with gradient descent, you get a good number, and that's great, but we don't trust the network because, as people, we don't understand why it's making the decisions it makes.
A lot of these visualization techniques were developed to address that criticism, to help us understand, as people, why the networks make the classification decisions they do. If you contrast a deep convolutional network with other machine learning techniques, linear models are much easier to interpret, because you can look at the weights and understand how much each input feature affects the decision, and things like random forests or decision trees also tend to be more interpretable by their very nature than these black-box convolutional networks. So a lot of this work is a response to that criticism, showing that even though these are large, complex models, they are still doing interesting and interpretable things under the hood; they are not just randomly classifying things, they're doing something meaningful. Another cool thing we can do with this gradient-based optimization of images is DeepDream. This was a really cool blog post from Google a year or two ago, and while we've been talking about scientific value, this one is almost entirely for fun; the point is mostly to generate cool images, though as a side effect you also get some sense of what features these networks are looking for. What we do is take our input image, run it through the network up to some layer, set the gradient at that layer equal to the activation values themselves, backpropagate back to the image, update the image, and repeat. This has the interpretation of amplifying whatever features the network already detected in the image at that layer: whatever features existed there, we set the gradient equal to those features and tell the network to amplify them. Equivalently, you can see this as trying to maximize the L2 norm of the features at that layer. The code ends up looking really simple; your code for many of the homework assignments will probably be about this complex, or maybe even a little less. There are a couple of tricks here that you'll also see in your assignments. One is to jitter the image before computing the gradients: rather than running the exact image through the network, you shift it over by a couple of pixels and wrap the pixels that fall off around to the other side. This acts as a mild regularizer that encourages extra spatial smoothness in the generated image. You'll also see that they use L1 normalization of the gradients, which is a useful trick in these image-generation problems, and that they clip the pixel values once in a while: images should really be between 0 and 255, so this is a kind of projected gradient descent where we project back onto the space of valid images. When we do all this, we might start with some image of a sky and then get really cool results like this, where tiny features in the sky get amplified through the process.
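A hedged sketch of a single DeepDream-style step with the jitter, L1 gradient normalization, and clipping tricks; `model_upto_layer` is assumed to be the network truncated at the chosen layer (for example an nn.Sequential of its first few blocks), and the clipping range is a rough stand-in for proper clipping in un-normalized pixel units:

```python
import torch

def deepdream_step(model_upto_layer, img, lr=0.1, jitter=16):
    """One update: amplify whatever activations the chosen layer already computes."""
    # jitter: randomly roll the image a few pixels before computing gradients
    ox, oy = torch.randint(-jitter, jitter + 1, (2,))
    img_j = torch.roll(img.detach(), shifts=(int(ox), int(oy)), dims=(2, 3)).requires_grad_(True)

    act = model_upto_layer(img_j)
    # setting the "gradient" at this layer to the activation itself is the same
    # as maximizing 0.5 * sum(act**2), so we just backprop that objective
    (0.5 * act.pow(2).sum()).backward()

    g = img_j.grad
    g = g / (g.abs().mean() + 1e-8)              # L1-normalize the gradient
    with torch.no_grad():
        img_j = img_j + lr * g                   # gradient ascent on the pixels
        img = torch.roll(img_j, shifts=(-int(ox), -int(oy)), dims=(2, 3))
        img.clamp_(-1.5, 1.5)                    # rough clipping of pixel values
    return img.detach()
```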
You can see things like different mutant animals starting to pop up, spiral shapes, different kinds of houses and cars. That's all pretty interesting. There are a couple of patterns that pop up all the time and that people have named: the admiral dog shows up a lot, along with the pig-snail, the camel-bird and the dog-fish. These are fun, but the fact that dogs show up so much in these visualizations actually tells us something about the data the network was trained on. This is a network trained for ImageNet classification, which has a thousand categories, and a large fraction of those, over a hundred of them, are different breeds of dog. So it's not so surprising that when you run these visualizations, the network ends up hallucinating a lot of dog-like stuff, often morphed with other animals. If you do this at other layers of the network, you get different kinds of results. Here we're using one of the lower layers; the previous example was relatively high up. We have the interpretation that lower layers compute edges and swirls and things like that, and that's borne out when we run DeepDream at a lower layer. And if you run this for a long time, with some multiscale processing, you can get some really crazy images: they start with a small image, run DeepDream on it, enlarge it, continue DeepDream on the larger image, repeat over multiple scales, and maybe after the final scale restart from the beginning and just go wild with it. Those examples were all from networks trained on ImageNet. There's another dataset from MIT called the MIT Places dataset, which instead of 1,000 object categories has a couple hundred types of scenes, like bedrooms and kitchens. If we repeat the DeepDream procedure with a network trained on MIT Places, we get really cool visualizations as well: instead of dogs, slugs and admiral dogs, we often get roof shapes of Japanese-style buildings, or different types of bridges and mountain ranges. They're really beautiful visualizations. The code for DeepDream is online, released by Google, so you can go check it out and make your own pictures. Sorry, question? The question is what we take the gradient of. Because the gradient of one half x squared is x, sending the activation values back as the gradient at that layer is the same as backpropagating the gradient of the sum of squared activations, that is, it's equivalent to maximizing the squared L2 norm of the features at that layer. In practice many implementations don't explicitly compute that objective; they just send the activations back as the gradient. Another useful thing we can do is feature inversion, which again gives us a sense for what elements of the image are captured at different layers of the network.
For feature inversion, we take an image, run it through the network, and record the feature values at one of the layers, and then try to reconstruct the image from that feature representation. Based on what the reconstructed image looks like, we get some sense for what kind of information about the image was captured in that feature vector. Again we can do this with gradient-based optimization and a regularizer: rather than maximizing a score, we minimize the distance between the cached feature vector and the features computed from our generated image, so as to synthesize a new image that matches the features we recorded. A regularizer you frequently see here is the total variation regularizer, which you'll also meet on your homework: it penalizes differences between adjacent pixels, both left-to-right and top-to-bottom, to encourage spatial smoothness in the generated image. In the visualization on the left we show the original images, the elephants and the fruit; we run each image through a VGG-16 network, record the features at some layer, and synthesize a new image that matches the recorded features of that layer. This tells us how much information about the image is stored in the features of different layers. For example, if we reconstruct the image from the relu2_2 features of VGG-16, the image gets reconstructed almost perfectly, which means we're not throwing away much information about the raw pixel values at that layer. But as we move deeper and reconstruct from relu4_3 or relu5_1, the reconstructed image keeps the general spatial structure, you can still tell it's an elephant or a banana or an apple, but a lot of the low-level details, the exact pixel values, colors and textures, are lost at these higher layers. That suggests that as we move up through the layers of the network, it's throwing away low-level information about the exact pixels and keeping more semantic information that's somewhat invariant to small changes in color and texture. We're building toward style transfer here, which is really cool, but in addition to feature inversion we also need to talk about a related problem called texture synthesis. Texture synthesis is an old problem in computer graphics: given some input patch of texture, something like these little scales, we want to build a model and generate a larger piece of that same texture, for example a large image containing many scales that looks like the input.
There are nearest-neighbor approaches to texture synthesis that work pretty well, with no neural networks at all. One simple algorithm marches through the generated image one pixel at a time in scan-line order, looks at a neighborhood around the current pixel based on the pixels already generated, finds the nearest neighbor to that neighborhood among the patches of the input image, and copies over one pixel from the input. You don't need to understand the details; the point is that there are a lot of classical algorithms for texture synthesis that don't need neural networks, and they actually work reasonably well for simple textures. But as we move to more complex textures, simple methods that copy pixels directly from the input patch tend not to work so well. So in 2015 there was a really cool paper that applied neural network features to texture synthesis, framing it as a gradient-based procedure similar to the feature-matching objectives we've already seen. To perform neural texture synthesis they use the concept of a Gram matrix. We take our input texture, in this case some pictures of rocks, pass it through a convolutional network, and pull out the convolutional features at some layer; that feature volume might be C by H by W. You can think of it as an H by W spatial grid, where at each point we have a C-dimensional feature vector describing the local appearance of the image at that point. We then use this activation map to compute a texture descriptor of the input image. We pick two of the feature columns in this volume, each a C-dimensional vector, and take the outer product between them, which gives us a C by C matrix. That matrix tells us something about the co-occurrence of the different features at those two points in the image: if element (i, j) is large, it means both element i and element j of those two feature vectors were large. We repeat this for all pairs of feature vectors at all points of the H by W grid and average the results, and that gives us the C by C Gram matrix, which serves as a descriptor of the texture of the input image. What's interesting about the Gram matrix is that it throws away all spatial information in the feature volume, because we've averaged over all pairs of feature vectors at every position; instead it captures the second-order co-occurrence statistics of the features, and that turns out to be a nice texture descriptor. And by the way, this is really efficient to compute.
So, if you have a C by H by W three-dimensional tensor, you can just reshape it to C by (H times W), multiply it by its own transpose, and compute the whole thing in one shot, so it's super efficient. You might be wondering why we don't use an actual covariance matrix or something like that instead of this funny gram matrix, and the answer is that using true covariance matrices also works, but it's a little more expensive to compute, so in practice a lot of people just use the gram matrix descriptor. Now, once we have this neural descriptor of texture, we use a similar type of gradient ascent procedure to synthesize a new image that matches the texture of the original image. This looks kind of like the feature reconstruction that we saw a few slides ago, but instead of trying to reconstruct the whole feature map of the input image, we're just going to try to match this gram matrix texture descriptor of the input image instead. In practice, what this looks like is that you download some pretrained model, as in feature inversion; often people use the VGG networks for this. You take your texture image, feed it through the VGG network, and compute the gram matrix at many different layers of the network. Then you initialize your new image from some random initialization, and from there it looks like gradient ascent again, just like the other methods we've seen. You take that image, pass it through the same VGG network, compute the gram matrices at various layers, and compute the loss as the L2 norm between the gram matrices of your input texture and of your generated image. Then you backprop to compute the gradient on the pixels of your generated image, and take a gradient step to update the pixels of the image a little bit. You repeat this process many times: go forward, compute your gram matrices, compute your losses, backprop the gradient onto the image, and repeat. Once you do this, eventually you'll end up generating a texture that matches your input texture quite nicely. This was all from a NIPS 2015 paper by a group in Germany, and they had some really cool results for texture synthesis. Here on the top we're showing four different input textures, and on the bottom we're showing the results of this texture synthesis approach by gram matrix matching, computing the gram matrix at different layers of the pretrained convolutional network. You can see that if we use very low layers of the convolutional network, we generally get splotches of the right colors, but the overall spatial structure doesn't get preserved so much. As we move down further in the figure and compute these gram matrices at higher layers, you see that they tend to reconstruct larger patterns from the input image, for example whole rocks or whole cranberries. This works pretty well: we can synthesize new images that match the general spatial statistics of the input, but are quite different pixel-wise from the actual input itself. Question? So, the question is, where do we compute the loss? In practice, to get good results, people typically compute gram matrices at many different layers, and the final loss is a sum over all of them, potentially a weighted sum. 
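Putting the gram descriptor together with the gradient-based loop just described, a minimal neural texture synthesis sketch might look like the following. It reuses the `gram_matrix` helper sketched above and assumes a pretrained VGG-16; the layer indices, step count, and learning rate are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torchvision

def texture_synthesis(texture_img, layer_ids=(0, 5, 10, 17, 24), steps=500, lr=0.05):
    # texture_img: (1, 3, H, W) tensor holding the input texture patch.
    vgg = torchvision.models.vgg16(pretrained=True).features.eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    def grams(x):
        # Collect gram matrices at several layers of the network.
        out, feats = [], x
        for i, m in enumerate(vgg):
            feats = m(feats)
            if i in layer_ids:
                out.append(gram_matrix(feats[0]))   # gram_matrix from the sketch above
        return out

    target_grams = [g.detach() for g in grams(texture_img)]
    x = torch.randn_like(texture_img, requires_grad=True)   # random initialization
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # L2 loss between gram matrices of the input texture and the generated image.
        loss = sum((g - t).pow(2).sum() for g, t in zip(grams(x), target_grams))
        loss.backward()
        opt.step()
    return x.detach()
```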
But I think for this visualization, to try to pinpoint the effect of the different layers, these reconstructions were done from just one layer. Then they had a really brilliant idea after this paper: what if we do this texture synthesis approach, but instead of using an image like rocks or cranberries, we set the input equal to a piece of artwork? So, for example, if you run the same texture synthesis algorithm by matching gram matrices, but take Vincent van Gogh's Starry Night or the Muse by Picasso as the input texture, then the generated images tend to reconstruct interesting pieces of those artworks. And now something really interesting happens when you combine this idea of texture synthesis by gram matrix matching with feature inversion by feature matching: this brings us to a really cool algorithm called style transfer. In style transfer we take two images as input. One is a content image that guides what we generally want our output to look like, and the other is a style image that tells us the general texture or style we want our generated image to have. We then generate a new image by jointly minimizing the feature reconstruction loss of the content image and the gram matrix loss of the style image. When we do these two things, we get a really cool image that renders the content image in the artistic style of the style image, and you can get really beautiful figures. Again, what this looks like is that you take your style image and your content image, pass them into your network to compute the gram matrices and the features, initialize your output image with random noise, go forward and compute your losses, go backward and compute your gradients on the image, and repeat this process over and over, taking gradient steps on the pixels of your generated image. After a few hundred iterations, you'll generally get a beautiful image. I have an implementation of this online on my GitHub that a lot of people are using, and it's really cool. This gives you a lot more control over the generated image compared to DeepDream. In DeepDream you don't have much control over exactly what is going to come out at the end; you just pick different layers of the network, maybe set different numbers of iterations, and then dog-slugs pop up everywhere. But with style transfer you get much more fine-grained control over what you want the result to look like. By picking different style images with the same content image, you can generate completely different kinds of results, which is really cool. You can also play around with the hyperparameters. Because we're jointly minimizing the feature reconstruction loss of the content image and the gram matrix reconstruction loss of the style image, if you trade off the weighting between those two terms in the loss, you can control how much you want to match the content versus how much you want to match the style. There are a lot of other hyperparameters you can play with. 
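For concreteness, a minimal sketch of that combined objective might look like the following. The content layer choice and the `content_weight` / `style_weight` values are illustrative assumptions, not the exact settings used in the lecture's implementation; the feature and gram lists are assumed to come from the same layers of the pretrained network as in the earlier sketches.

```python
import torch

def style_transfer_loss(gen_feats, content_feats, gen_grams, style_grams,
                        content_layer=2, content_weight=1.0, style_weight=1e3):
    # Feature reconstruction loss against the content image at one chosen layer.
    content_loss = (gen_feats[content_layer] - content_feats[content_layer]).pow(2).mean()
    # Gram matrix loss against the style image, summed over several layers.
    style_loss = sum((g - s).pow(2).sum() for g, s in zip(gen_grams, style_grams))
    # Trading off these two weights controls how much content vs. style is matched.
    return content_weight * content_loss + style_weight * style_loss
```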
For example, if you resize the style image before you compute the gram matrix, that gives you some control over the scale of the features you want to reconstruct from the style image. You can see that here we've done the same reconstruction, and the only difference is how big the style image was before we computed the gram matrix. That gives you another axis along which you can control these things. You can also do style transfer with multiple style images by matching multiple gram matrices at the same time, and that's a cool result. Another cool thing you can do: we talked about multi-scale processing for DeepDream and saw how it can give you really high-resolution results, and you can do a similar type of multi-scale processing in style transfer as well. Then we can compute images like this: a super high-resolution, I think 4K, image of our favorite school rendered in the style of Starry Night. This is actually super expensive to compute; I think this one took four GPUs, so a little expensive. We can also use other style images and get some really cool results from the same content image, again at high resolution. Another fun thing you can do is joint style transfer and DeepDream at the same time. Now we have three losses: the content loss, the style loss, and the DeepDream loss that tries to maximize the norm of the features, and you get something like this: Van Gogh with the dog-slugs coming out everywhere. [laughing] So that's really cool. But there's kind of a problem with these style transfer algorithms, which is that they are pretty slow. You need to compute a lot of forward and backward passes through the pretrained network in order to generate these images, and especially for the high-resolution results we saw on the previous slide, each forward and backward pass of a 4K image takes a lot of compute and a lot of memory. If you need to do several hundred of those iterations, generating these images can take many minutes even on a powerful GPU, so it's really not so practical to apply in practice. The solution is to train another neural network to do the style transfer for us. I had a paper about this last year, and the idea is that we fix some style that we care about at the beginning, in this case Starry Night. Rather than running a separate optimization procedure for each image that we want to synthesize, we instead train a single feed-forward network that takes the content image as input and directly outputs the stylized result. The way we train this network is that we compute the same content and style losses during training of the feed-forward network and use that gradient to update the weights of the feed-forward network. This thing takes maybe a few hours to train, but once it's trained, producing stylized images only requires a single forward pass through the trained network. I have code for this online too, and you can see that it ends up looking relatively comparable in quality, in some cases, to the very slow optimization-based method, but now it runs in real time; it's about a thousand times faster. So here you can see a demo of it running live off my webcam. 
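A minimal sketch of training such a feed-forward stylization network for one fixed style might look like the following. It assumes a hypothetical `TransformNet`-style model passed in as `transform_net` (a small down/up-sampling conv net), a `vgg_features` callable returning per-layer feature maps, and it reuses the `gram_matrix` and `style_transfer_loss` helpers sketched above; data loading and the exact architecture are omitted, and this is not the paper's reference implementation.

```python
import torch

def train_fast_style(transform_net, content_loader, vgg_features,
                     style_grams, epochs=2, lr=1e-3):
    # style_grams: gram matrices of the fixed style image, precomputed once.
    opt = torch.optim.Adam(transform_net.parameters(), lr=lr)
    for _ in range(epochs):
        for content in content_loader:            # batches of content images
            stylized = transform_net(content)     # one forward pass per image at test time
            gen_feats = vgg_features(stylized)
            with torch.no_grad():
                content_feats = vgg_features(content)
            # Assumes batch size 1 here for simplicity when computing gram matrices.
            gen_grams = [gram_matrix(f[0]) for f in gen_feats]
            loss = style_transfer_loss(gen_feats, content_feats, gen_grams, style_grams)
            opt.zero_grad()
            loss.backward()
            opt.step()     # update the feed-forward network's weights, not the pixels
```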
So, this is not running live right now, obviously, but if you have a big GPU you can easily run four different styles in real time, all simultaneously, because it's so efficient. There was another group, from Russia, that had a very similar paper concurrently, and their results are about as good. They also had a nice tweak on the algorithm. The feed-forward network that we're training ends up looking a lot like the semantic segmentation models that we saw: those segmentation networks do downsampling, then many layers, then some upsampling with transposed convolutions, in order to be more efficient. The only difference is that the final layer produces a three-channel output for the RGB of the final image. Inside this network we have batch normalization in the various layers, but in this paper they swap out the batch normalization for something else called instance normalization, which tends to give you much better results. One drawback of these types of methods is that we're now training one new style transfer network for every style that we want to apply, which could be expensive if you need to keep a lot of different trained networks around. There was a paper from Google that came out pretty recently that addresses this by using one trained feed-forward network to apply many different styles to the input image. They can train one network to apply many different styles at test time: it takes the content image as input, as well as the identity of the style you want to apply, and uses one network to apply many different types of styles, again in real time. That same algorithm can also do style blending in real time with one trained network: once you've trained the network on these four different styles, you can specify a blend of those styles to be applied at test time, which is really cool. These kinds of real-time style transfer methods are in various apps, and you can see them out in practice a lot these days. So, the summary of what we've seen today is that we've talked about many different methods for understanding CNN representations. We've talked about activation-based methods like nearest neighbors, dimensionality reduction, maximal patches, and occlusion images, to try to understand what the features are looking for based on activation values. We also talked about a bunch of gradient-based methods, where you can use gradients to synthesize new images to understand your features, such as saliency maps, class visualizations, fooling images, and feature inversion. And we also had fun seeing how a lot of these ideas can be applied to things like style transfer and DeepDream to generate really cool images. So, next time we'll talk about unsupervised learning: autoencoders, variational autoencoders, and generative adversarial networks, so that should be a fun lecture.
History_vs
What_makes_Thomas_Jefferson_so_controversial_Frank_Cogliano.txt
He was part of America's fight for freedom and equality. But were his enlightened principles outweighed by participation in a greater injustice? Find out on History versus Thomas Jefferson. Order! Order! Hey, that’s one of the guys from Mt. Rushmore. Ahem. This is Thomas Jefferson, founding father of the United States of America and primary author of the Declaration of Independence. The document that established the US as a democratic republic on the principle that everyone is created equal. If by “everyone” you mean property-owning white men. At the time Jefferson was writing, one fifth of the colonies’ population was enslaved. Surely he couldn’t be expected to single-handedly overturn the institution of slavery? Couldn’t he have just written that into the Declaration? It wasn’t that simple, Your Honor. Jefferson was one of five authors, and the document had to be ratified by the Continental Congress. He included a clause opposing the slave trade, but state delegates removed it. Nevertheless, Jefferson recognized slavery as an immoral institution and condemned it throughout his life. But Jefferson’s words never came close to matching his actions! As Virginia’s governor, he did nothing to change the state slave laws. And in his personal life, he held over 600 people in slavery. Furthermore, he believed Black people were intellectual inferiors who, if emancipated, should return to their countries of origin. Frankly, there’s no argument that Jefferson did anything significant to combat slavery. It’s true, Your Honor. But Jefferson did make important contributions to religious, financial, and gender equality. He led the charge for separating church and state, removing government funding for Virginia’s Anglican Church, and paving the way for our modern understanding of religious freedom. Jefferson also drafted laws that weakened the power of inherited wealth and pushed for the state-funded education of boys and girls. All valuable reforms, but you’re avoiding the fundamental issue here. None of this benefited enslaved people or Indigenous Americans, and it’s ridiculous to argue that Jefferson was pursuing equality when his policies frequently harmed non-white groups. Policies such as authorizing the military to exterminate Indigenous communities during the Revolutionary War. Objection! Those Northwestern tribes were allied with the British. In peacetime, Jefferson did his best to avoid conflict with Native Americans and believed they could be equal to whites. “Could be equal”? Listen to yourself! Are you defending his attempts to forcibly assimilate Indigenous communities? Jefferson’s recommendations even formed the basis for the Indian Removal Act years later. Recommendations? Why not laws? Thomas Jefferson served as a diplomat and Secretary of State before being elected as Vice President under John Adams in 1796. A role in which he undermined the President’s authority. Jefferson argued that states should have the power to overrule federal laws they deemed unconstitutional— an argument some Southern states would cite while seceding from the Union 70 years later. I think it’s a little unreasonable to lay the entire Civil War at Jefferson’s feet. Besides, his defense of states’ rights was motivated by the president’s overreaching central government. As part of Adams’ preparations for war with France, he signed legislation that tightened restrictions on immigrants and limited criticism of the government. Jefferson was just trying to protect the public. 
And ultimately, his efforts were so popular that he was elected as the next president. A dubious victory. He only won because states were allowed to count enslaved people towards their population without giving them voting rights. This system gave states that held people in slavery additional voting power in the Electoral College until the Civil War. Be that as it may, Jefferson was a popular president. He worked to prevent the country from taking on too much debt, and successfully led the US through the Napoleonic and the Barbary Wars. Plus, he dramatically expanded the country’s territory through the Louisiana Purchase. Where he once again failed to stop slavery from taking hold. I’ll remind you that President Jefferson signed a law forbidding the importation of enslaved people in 1807. And yet he continued to enslave those already on American soil— including his own flesh and blood. Pardon? Following his wife's death, Jefferson began a relationship with her half-sister and maid, Sally Hemings. Jefferson fathered six children with Hemings and kept the entire relationship secret, while continuing to publicly denounce the personhood of Black Americans. Jefferson freed several members of the Hemings family, including his children with Sally— While refusing to free anyone else. Despite enslaving over 600 people, Jefferson only freed 10. Five during his life and five in his will— all members of the Hemings family. Even I have to admit, this seems indefensible. It’s true, Your Honor. Despite pursuing what he believed to be equality, Jefferson failed to uphold his own ideals. Ultimately, he was a man of his time— living in an economy that relied on exploitation and enslaved labor. That’s hardly a defense when many of Jefferson’s contemporaries opposed slavery and took action to abolish it. Even if some people considered him a great man in his time, he doesn’t have to be an icon in ours. Well, I hear Mount Rushmore has a problematic past, too. Can we judge historical figures by modern standards? And what responsibilities do powerful people have to the future? Answering these questions is all part of putting history on trial.
History_vs
모의_법정_역사_대_리처드_닉슨_사건_알렉스_겐들러.txt
The presidency of the United States of America is often said to be one of the most powerful positions in the world. But of all the U.S. presidents accused of misusing that power, only one has left office as a result. Does Richard Nixon deserve to be remembered for more than the scandal that ended his presidency? Find out as we put this disgraced president's legacy on trial in History vs. Richard Nixon. "Order, order. Now, who's the defendant today, some kind of crook?" "Cough. No, your Honor. This is Richard Milhous Nixon, the 37th president of the United States, who served from 1969 to 1974." "Hold on. That's a weird number of years for a president to serve." "Well, you see, President Nixon resigned for the good of the nation and was pardoned by President Ford, who took over after him." "He resigned because he was about to be impeached, and he didn't want the full extent of his crimes exposed." "And what were these crimes?" "Your Honor, the Watergate scandal was one of the grossest abuses of presidential power in history. Nixon's men broke into the Democratic National Committee headquarters to wiretap the offices and dig up dirt on opponents for the reelection campaign." "Cough It was established that the President did not order this burglary." "But as soon as he learned of it, he did everything to cover it up, while lying about it for months." "Uh, yes, but it was for the good of the country. He did so much during his time in office and could have done so much more without a scandal jeopardizing his accomplishments." "Uh, accomplishments?" "Yes, your Honor. Did you know it was President Nixon who proposed the creation of the Environmental Protection Agency, and signed the National Environmental Policy Act into law? Not to mention the Endangered Species Act, Marine Mammal Protection Act, expansion of the Clean Air Act." "Sounds pretty progressive of him." "Progressive? Hardly. Nixon's presidential campaign courted Southern voters through fear and resentment of the civil rights movement." "Speaking of civil rights, the prosecution may be surprised to learn that he signed the Title IX amendment, banning gender-based discrimination in education, and ensured that desegregation of schools occurred peacefully, and he lowered the voting age to 18, so that students could vote." "He didn't have much concern for students after four were shot by the National Guard at Kent State. Instead, he called them bums for protesting the Vietnam War, a war he had campaigned on ending." "But he did end it." "He ended it two years after taking office. Meanwhile, his campaign had sabotaged the previous president's peace talks, urging the South Vietnamese government to hold out for supposedly better terms, which, I might add, didn't materialize. So, he protracted the war for four years, in which 20,000 more U.S. troops, and over a million more Vietnamese, died for nothing." "Hmm, a presidential candidate interfering in foreign negotiations -- isn't that treason?" "It is, your Honor, a clear violation of the Logan Act of 1799." "Uh, I think we're forgetting President Nixon's many foreign policy achievements. It was he who normalized ties with China, forging economic ties that continue today." "Are we so sure that's a good thing? And don't forget his support of the coup in Chile that replaced the democratically-elected President Allende with a brutal military dictator." "It was part of the fight against communism." "Weren't tyranny and violence the reasons we opposed communism to begin with? 
Or was it just fear of the lower class rising up against the rich?" "President Nixon couldn't have predicted the violence of Pinochet's regime, and being anti-communist didn't mean neglecting the poor. He proposed a guaranteed basic income for all American families, still a radical concept today. And he even pushed for comprehensive healthcare reform, just the kind that passed 40 years later." "I'm still confused about this burglary business. Was he a crook or not?" "Your Honor, President Nixon may have violated a law or two, but what was the real harm compared to all he accomplished while in office?" "The harm was to democracy itself. The whole point of the ideals Nixon claimed to promote abroad is that leaders are accountable to the people, and when they hold themselves above the law for whatever reason, those ideals are undermined." "And if you don't hold people accountable to the law, I'll be out of a job." Many politicians have compromised some principles to achieve results, but law-breaking and cover-ups threaten the very fabric the nation is built on. Those who do so may find their entire legacy tainted when history is put on trial.
History_vs
History_vs_Tamerlane_the_Conqueror_Stephanie_Honchell_Smith.txt
He was born in the 1330s in the Chaghatayid Khanate formerly the Mongol Empire in Central Asia. On the unforgiving steppe, he rose from a lowly sheep thief to become one of history’s greatest conquerors, uniting nearly all of Central Asia, Afghanistan, and Iran under his rule. But was he a great state builder or a bloodthirsty tyrant? Order! Order! Who do we have on the stand today? Tamer...lane? That wasn’t his name, your Honor. The great Timur— meaning iron— was nicknamed “Timur the lame” by enemies who mocked permanent injuries to his leg and arm. Injuries he sustained raiding a rival tribe’s sheep herd— he was a thief and a scoundrel from the start! Maybe— we actually don’t know for sure— but even if that’s true, raiding rivals was just part of nomadic life at the time. Timur was not born into a ruling family, so he had to prove his worth through daring and horsemanship. He was hardly a commoner. Timur’s family was minor nobility. His uncle and brother-in-law were high-ranking officials. And when they trusted Timur on a diplomatic mission, he defected to a rival khan! Strategic maneuvering! He reconciled with his uncle and brother-in-law soon after. Only for long enough to consolidate his own power. Then he went to battle against his brother-in-law— supposedly his closest ally. He was assassinated, and Timur seized power! They may have been friends, but he was a corrupt man who alienated a lot of people. Timur was right to oust him. Afterward, he managed to reunite most of the khanate’s territories and put an end to decades of bloody infighting. Okay, so where are we? I can hardly keep up. 1370, your honor. And he’s khan now? Well, not quite. Timur was not a direct descendant of Genghis Khan, so he couldn’t claim the title. Instead, he appointed figurehead khans and referred to himself as amir, or commander, and later as güregen, or son-in-law, after he married a woman who was descended from Genghis Khan. He claimed to be a divinely ordained protector of the Mongol and Muslim worlds, yet he undermined both Mongol and Muslim power by relentlessly waging war against his neighbors, weakening them so much that Christian Europe romanticized him as an ally. His campaigns killed as many as 17 million people! Propaganda. Timur’s official biographies deliberately exaggerated the number of deaths to deter rebellions. Like the Mongols, Timur offered cities the chance to surrender and only ordered massacres if they revolted. He rebuilt irrigation canals to support agriculture, and regularly distributed food to the poor. Just in his hometown of Kesh, he paid for the meat of 20 sheep to be given to the poor every day. His campaigns were brutal, but by unifying Central Asia, Afghanistan, and Iran, he was also able to reinvigorate the Silk Road. Much of Eurasia benefited from the revival of long-distance trade, and Central Asian cities, such as Samarkand and Herat, became thriving commercial hubs under his rule. And meanwhile, other cities like Baghdad, Aleppo, and Delhi were plundered and burned and took decades to recover. This illiterate warlord destroyed centuries’ worth of cultural heritage, leaving nothing but pyramids of skulls in his wake. Timur may have been illiterate, but he was also an active patron of culture and the arts. During his conquests, he spared artisans and scholars, sending them to work on public projects like schools and mosques. Unlike many women in the world at the time, his wives, daughters, and daughters-in-law were highly educated and politically active. 
Timur also personally met with— and impressed— the famed Arab historian Ibn Khaldun in Damascus, and he so thoroughly mastered chess that he is said to have enjoyed a more complex variant that was named for him. So what happened after that? Timur died from an illness in 1405, when he was likely in his early 70s. The empire he founded lasted another hundred years, ushering in an architectural, artistic, literary and scientific renaissance across Central Asia. In Samarkand, Timur’s grandson, Ulugh Beg, built the largest astronomical observatory in the world at the time. Even after the fall of Timur's empire, his descendant Babur re-established himself in India, founding the Mughal Empire, which would become home to nearly a quarter of the world’s population and which built such splendors as the Red Fort and Taj Mahal. Timur's legacy is still celebrated in monuments across Central Asia, where he is remembered as “Buyuk Babamiz” or “our great forefather.” And yet today in Europe, India, and much of the Middle East, he's remembered as a butcher. That’s more reflection of the success of his own propaganda than of the man himself. Hold on now, I think I’ve almost got the king cornered! Emerging from relative obscurity, Timur’s conquests formed a legacy lasting nearly 500 years that remains on trial even today.
History_vs
History_vs_Egypts_most_powerful_pharaoh_Jessica_Tomkins.txt
Pharaoh Ramesses II reigned for almost 70 years in the 13th century BCE. He presided over a golden age of Egyptian prosperity. But was he a model leader, or a shameless egomaniac and master of propaganda? Order! Order! Who do we have on the stand? Ramesses II? Ahem, I believe you mean "The strong bull, protector of Egypt, who subdues foreign lands; rich in years, great in victories, chosen by Ra— whose justice is powerful— Ramesses, beloved of Amun." But you may refer to him as Ramesses the Great. Ramesses, Ramesses— I think I've heard of him. "Let my people go!" Yes, Your Honor, he was the infamously stubborn pharaoh of Exodus, who forced enslaved Hebrews to build out his extravagant capital city of Pi-Ramesses. Objection, there's no archaeological evidence that Ramesses used forced labor in his construction projects. Egyptians relied on highly trained artisans and craftsmen to build their cities and monuments. And menial labor, like quarrying and moving stones, was done by military soldiers and foreign mercenaries— all of whom were compensated for their work. So he's not the pharaoh from Exodus? It's hard to say for certain. According to some timelines, Ramesses is the best candidate for that particular pharaoh. But there's no evidence of any Hebrew population in Egypt during his reign, and certainly no records of a revolution or mass migration like the one described in Exodus. Is that really so surprising? It's not like Ramesses kept records of any other time he was defeated. What do you mean? Our "great" pharaoh here operated one of the largest propaganda machines in ancient history. Almost all pharaohs relied on propaganda to control their country, and Ramesses had particularly big sandals to fill. His father, Seti I, led Egypt to a period of great wealth and stability that Ramesses worked hard to maintain. Through propaganda, yes, but also military glory. More like military aggression. By the end of Seti's reign, he had established peace with the neighboring Hittites by guaranteeing Egyptian control over a sought-after region called Kadesh. But in the fifth year of his reign, Ramesses broke those agreements. It wasn't the most peaceful decision, but Ramesses believed a military victory would aid his efforts to restore Egypt's reputation. And he was right! His victory over the Hittites cemented Ramesses' persona as a heroic pharaoh of old. Yeah, except he didn't even win! His supposed victory was actually a stalemate in which the pharaoh's arrogance almost cost Egypt the entire war. When two Hittite spies told Ramesses the enemy had fled in fear, he let his guard down, allowing his men to be ambushed. He played right into the Hittite's trap and almost lost everything. Yet the official story Ramesses had chiseled across Egypt cast himself as the battle's greatest hero. Military achievements were important for legitimizing a pharaoh's power, even if that meant a bit of exaggeration. A bit? You can't trust anything this guy says! If it wasn't for the Hittite's conflicting record, we'd still be buying Ramesses' propaganda. I propose that this court cannot judge any historical figure's legacy by the stories they tell about themselves. That seems reasonable to me. Fair enough. How's this record then— the first recorded peace treaty in archaeological history signed by Ramesses and the Hittites. "Peace treaty" is pretty generous. Ramesses begrudgingly agreed to a mutual defense contract, where Egyptians and Hittites would work together if attacked by an external enemy. 
And what's wrong with that? This peaceful end to the conflict marked the beginning of Ramesses' prosperous reign— a golden age of Egyptian power and wealth. True, but we have no idea if this wealth trickled down to everyday Egyptians or just financed Ramesses' vain attempts to achieve immortality through stone. He spent his entire reign pouring money into ego projects. And one of the most famous, Abu Simbel, wasn't even in Egypt! Abu Simbel was in Nubia to showcase Egypt's strength and discourage a Nubian revolution. Besides, pharaohs were expected to invest in building projects for the gods. Right, "for the gods," which Ramesses was not. Pharaohs typically occupied a status between gods and mortals, but the location of Ramesses' statue inside Abu Simbel positions him as their supposed equal. He even tore down existing temples to reuse their bricks in monuments to himself! He tore down temples built by Akhenaten, a pharaoh who'd attempted to impose monotheism. By destroying those temples, Ramesses reinforced his commitment to Egypt's traditional religion. That doesn't account for why he frequently erased other pharaohs' names on monuments and replaced them with his own. Hey, what the...? Even pharaohs who had short reigns had numerous statues made of themselves— and Ramesses ruled for almost seven decades. Well, he definitely made a lasting impression. Let's face it, Your Honor, would we even be talking about him today if he hadn't? It's often said that history is written by the winners, but in this courtroom, a winning record never guarantees the outcome.
History_vs
History_vs_Cleopatra_Alex_Gendler.txt
"Order, order. So who do we have here?" "Your Honor, this is Cleopatra, the Egyptian queen whose lurid affairs destroyed two of Rome's finest generals and brought the end of the Republic." "Your Honor, this is Cleopatra, one of the most powerful women in history whose reign brought Egypt nearly 22 years of stability and prosperity." "Uh, why don't we even know what she looked like?" "Most of the art and descriptions came long after her lifetime in the first century BCE, just like most of the things written about her." "So what do we actually know? Cleopatra VII was the last of the Ptolemaic dynasty, a Macedonian Greek family that governed Egypt after its conquest by Alexander the Great. She ruled jointly in Alexandria with her brother- to whom she was also married- until he had her exiled." "But what does all this have to do with Rome?" "Egypt had long been a Roman client state, and Cleopatra's father incurred large debts to the Republic. After being defeated by Julius Caesar in Rome's civil war, the General Pompey sought refuge in Egypt but was executed by Cleopatra's brother instead." "Caesar must have liked that." "Actually, he found the murder unseemly and demanded repayment of Egypt's debt. He could have annexed Egypt, but Cleopatra convinced him to restore her to the throne instead." "We hear she was quite convincing." "And why not? Cleopatra was a fascinating woman. She commanded armies at 21, spoke several languages, and was educated in a city with the world's finest library and some of the greatest scholars of the time." "Hmm." "She kept Caesar lounging in Egypt for months when Rome needed him." "Caesar did more than lounge. He was fascinated by Egypt's culture and knowledge, and he learned much during his time there. When he returned to Rome, he reformed the calendar, commissioned a census, made plans for a public library, and proposed many new infrastructure projects." "Yes, all very ambitious, exactly what got him assassinated." "Don't blame the Queen for Rome's strange politics. Her job was ruling Egypt, and she did it well. She stabilized the economy, managed the vast bureaucracy, and curbed corruption by priests and officials. When drought hit, she opened the granaries to the public and passed a tax amnesty, all while preserving her kingdom's stability and independence with no revolts during the rest of her reign." "So what went wrong?" "After Caesar's death, this foreign Queen couldn't stop meddling in Roman matters." "Actually, it was the Roman factions who came demanding her aid. And of course she had no choice but to support Octavian and Marc Antony in avenging Caesar, if only for the sake of their son." "And again, she provided her particular kind of support to Marc Antony." "Why does that matter? Why doesn't anyone seem to care about Caesar or Antony's countless other affairs? Why do we assume she instigated the relationships? And why are only powerful women defined by their sexuality?" "Order." "Cleopatra and Antony were a disaster. They offended the Republic with their ridiculous celebrations sitting on golden thrones and dressing up as gods until Octavian had all of Rome convinced of their megalomania." "And yet Octavian was the one who attacked Antony, annexed Egypt, and declared himself Emperor. It was the Roman's fear of a woman in power that ended their Republic, not the woman herself." "How ironic." Cleopatra's story survived mainly in the accounts of her enemies in Rome, and later writers filled the gaps with rumors and stereotypes. 
We may never know the full truth of her life and her reign, but we can separate fact from rumor by putting history on trial.