### At the Test

• You may make yourself some reference notes on the small card I hand out (additional cards are on my door if you need to rewrite it). The reference card must be handwritten. Think of the card as a way to include some important examples or concepts that you aren't as comfortable with. You won't have room for everything, and you should try to internalize as much as you can.
• A calculator will be allowed but is not required (no cell phones, and no calculators bundled with additional technologies).
• You may have out food, hydration, ear plugs, or similar items if they will help you; however, any ear plugs must be stand-alone (no cell phone, internet, or other technological connections).
• There will be various types of questions on the test and your grade will be based on the quality of your responses in a timed environment.

Here is a sample partial test and solutions so that you can see an example of the formatting and style of questions. As listed there, you will see three sections that are typeset professionally (using LaTeX):

• Fill in the blank
• Computations and Interpretations / Analyses
• True/False Questions

### Topics to Study

• Review the slides.
• Brush up on any exam 1 material. Exam 2 will be a majority of new material, but questions from exam 1 may appear on exam 2. Many newer concepts extend prior ones (Theorem 8 in 2.3 with determinants; parameterizing homogeneous equations in 1.5, whose solutions are now the null space; the span of vectors in 1.3, which is now the column space; span in 1.3 and linear independence in 1.7, which combine to give a basis...).
• Linear transformations of the plane, both the 2x2 and the 3x3 homogeneous-coordinates versions. Know the following (which are on the review LaTeX slide):
general rotation matrix
projections onto the y=x line, and the x and y axes
reflections across the y=x line, and the x and y axes
horizontal shear
dilation
translation
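As a reference sketch of the matrices listed above (double-check these against the review slide; the letter names here are just labels for this summary):

```latex
% Rotation by angle theta (counterclockwise)
R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}

% Projections onto the x-axis, the y-axis, and the line y = x
P_x = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad
P_y = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \quad
P_{y=x} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}

% Reflections across the x-axis, the y-axis, and the line y = x
F_x = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad
F_y = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}, \quad
F_{y=x} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}

% Horizontal shear (factor k) and dilation (factor r)
S = \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}, \quad
D = \begin{pmatrix} r & 0 \\ 0 & r \end{pmatrix}

% Translation by (a, b) in 3x3 homogeneous coordinates
T_{(a,b)} = \begin{pmatrix} 1 & 0 & a \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}
```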
• Rotate about a point, like (4,9): Translate by (4,9) . Rotate . Translate by (-4,-9) (read right to left: first translate the point to the origin, rotate, then translate back).
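The translate-rotate-translate idea can be sketched in Python using 3x3 homogeneous-coordinate matrices (a hypothetical illustration; the course uses Maple, but the composition is the same):

```python
import math

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def translate(a, b):
    # 3x3 homogeneous translation by (a, b)
    return [[1, 0, a], [0, 1, b], [0, 0, 1]]

def rotate(theta):
    # 3x3 homogeneous rotation about the origin
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

# Rotate 90 degrees about the point (4, 9):
# translate (4,9) to the origin, rotate, translate back (applied right to left).
M = matmul(translate(4, 9), matmul(rotate(math.pi / 2), translate(-4, -9)))

# Sanity check: the center (4, 9) should be fixed by this transformation.
center = [[4], [9], [1]]
print(matmul(M, center))  # approximately [[4], [9], [1]]
```

Reversing the order of the translations would move the center instead of fixing it, which is the "car flying off a track" failure mode.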
• Composition of linear transformations: right to left (ABCx means first C(x), then B applied to that, then A applied to that result, just like with function composition).
• If the composition is in the wrong order, then it won't give us the intended action, like a car flying off a track.
• Length and angle of a vector and orthogonal vectors and their use in computer graphics applications (turning a car on a track and preserving the size)
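A quick sketch of the length, angle, and orthogonality formulas (the function names here are just for illustration):

```python
import math

def dot(u, v):
    """Dot product of two vectors."""
    return sum(ui * vi for ui, vi in zip(u, v))

def length(v):
    """Length (norm) of a vector: sqrt(v . v)."""
    return math.sqrt(dot(v, v))

def angle(u, v):
    """Angle between u and v in radians, from cos(theta) = (u.v)/(|u||v|)."""
    return math.acos(dot(u, v) / (length(u) * length(v)))

u, v = [3, 4], [-4, 3]
print(length(u))   # 5.0
print(dot(u, v))   # 0, so u and v are orthogonal
print(angle(u, v)) # pi/2, about 1.5708
```

Rotation matrices preserve lengths and angles, which is why a rotated car on a track keeps its size.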
• Big picture ideas of Yoda and transpose of a matrix
• Computer speed of using associativity on (AB).Large matrix=A(B.Large matrix) including counting the number of multiplications and reasoning that (AB).Large matrix is much faster
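The counting argument can be sketched as follows. Multiplying an m x n matrix by an n x p matrix takes m*n*p scalar multiplications with the standard algorithm; the sizes below are illustrative, with A and B 2x2 and the large matrix holding a million points:

```python
def mult_cost(m, n, p):
    # Standard algorithm: each of the m*p entries needs n multiplications.
    return m * n * p

N = 1_000_000  # number of columns (points) in the large matrix

# (A.B).Large: one small 2x2 product, then one 2x2-times-2xN product.
cost_grouped = mult_cost(2, 2, 2) + mult_cost(2, 2, N)

# A.(B.Large): two 2x2-times-2xN products.
cost_ungrouped = mult_cost(2, 2, N) + mult_cost(2, 2, N)

print(cost_grouped)    # 4000008
print(cost_ungrouped)  # 8000000
```

So (AB).Large needs roughly half the multiplications of A.(B.Large), even though associativity guarantees the same answer.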
• Algebra of determinants, including the Laplace expansion
• The connection of determinants to the square matrix theorem 8
• Geometry of determinants, including the impact of row operations
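A minimal sketch of the Laplace (cofactor) expansion along the first row (just an illustration of the algebra, not how Maple's Determinant command works internally):

```python
def det(M):
    """Determinant via Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        # Cofactor sign alternates: +, -, +, ...
        total += (-1) ** j * M[0][j] * det(minor)
    return total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det(A))  # -3
```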
• A deeper understanding of the algebra and geometry of three topics we had previously covered:
1. Replacement Gaussian reductions t (row 1) + (row 2) [parallel to row 1 through the tip of row 2]. The elementary matrix is a shear (as a linear transformation) that preserves the area of the parallelogram formed by the vectors in the original matrix, which eventually gets turned into a rectangle via replacement row operations (or the volume of a parallelepiped that gets turned into a rectangular prism in higher dimensions); that area (or volume) is the same as the determinant!
2. The span of the columns of a matrix A, which is the set of linear combinations, is now seen as the column space. For example, if A has 2 columns, then c(column 1) + d(column 2) is a plane through the origin if the columns are linearly independent and a line through the origin otherwise. If A has 3 or more columns, we need to do some computational work (like in problem set 4) to examine the column space and find a basis (linear independence + span) by using the original pivot columns.
3. The solutions of the linear-independence equation Ax = 0 (for the columns of a matrix A), which are the intersections of the rows of this homogeneous augmented matrix, form the null space, and we parameterize those solutions to find a basis.
• Algebra and geometry of subspaces, basis, column space and null space and the connections to previous material like linear independence and span (see above)
• Algebra and geometry of eigenvalues and eigenvectors [Ax = lambda x, (A - lambda I)x = 0; matrix multiplication turns into scalar multiplication for eigenvectors, so they are vectors that stay on the same line through the origin]
• Showing that solving Ax = lambda x is equivalent to solving the system (A - lambda I)x = 0 (i.e., the eigenvectors of A are the null space of (A - lambda I)). We concentrated on 2x2 matrices.
• Understanding that since we want nontrivial solutions of (A - lambda I)x = 0, we solve for lambda using determinant(A - lambda I) = 0 (otherwise the system would have only the trivial 0-vector solution, so we want the columns of (A - lambda I) to NOT be linearly independent), and being able to solve for the lambdas given a 2x2 matrix.
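For a 2x2 matrix, determinant(A - lambda I) = lambda^2 - (trace)lambda + (determinant), so the lambdas come from the quadratic formula. A small sketch (the function name is just for illustration):

```python
import math

def eigenvalues_2x2(A):
    """Real eigenvalues of a 2x2 matrix from det(A - lambda*I) = 0,
    i.e. lambda^2 - trace*lambda + det = 0."""
    a, b = A[0]
    c, d = A[1]
    trace = a + d
    determinant = a * d - b * c
    disc = trace ** 2 - 4 * determinant
    if disc < 0:
        return []  # no real eigenvalues (e.g. most rotations)
    r = math.sqrt(disc)
    return [(trace + r) / 2, (trace - r) / 2]

print(eigenvalues_2x2([[4, 1], [2, 3]]))   # [5.0, 2.0]
print(eigenvalues_2x2([[0, -1], [1, 0]]))  # [] -- a 90-degree rotation
```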
• 2x2 matrices which have a certain number of real eigenvalues (0,1,2) and eigenvectors (0 or infinitely many) or linearly independent eigenvectors (0,1,2).
• Maple output of the Eigenvalues and Eigenvectors command
• linear transformations of the plane and their corresponding eigenvalues and eigenvectors (projections, dilations, reflections, rotations, and shear matrices)
• Eigenvector decomposition of a 2x2 matrix A when the eigenvectors form a basis for R2, and the long-term behavior of dynamical systems: the limit and population ratios in situations of asymptotic behavior, as well as stability (lambda = 1), for various initial conditions (if a2 = 0 then ...; otherwise we die off, grow, or stabilize...), including the equations of the lines we come in along, or predation parameters, like in problem set 4.
• Filling in blanks like: If ___ equals 0 then we die off along the line ____ [corresponding to the eigenvector ____], and in all other cases we [choose one: die off, grow, or hit and then stay fixed] along the line ____ [corresponding to the eigenvector ____].
• Identifying population ratios in the long term (from the dominant eigenvector)
• Identifying the population growth or death rate in the long term (from the largest eigenvalue: how much above or below 1 it is)
• Rough sketch of a graph of the trajectory diagram when you begin in the first quadrant and start off the eigenvectors (and on them too). For example, in stability situations when one eigenvalue is 1: if x_k = a1 (1)^k Vector([2,1]) + a2 (0.7)^k Vector([-1,1]), then as long as a1 is nonzero, we will stabilize to the y = (1/2)x line via the population ratio of 2:1. Graphically, you should be able to draw pictures like in the Dynamical Systems demo and problem set solutions. In this specific example, you can tell from the algebra that given a starting position, you will come in parallel to Vector([-1,1]) (i.e., along a line of the form x + y = c) until we eventually hit the stability line, where we stay forever, and that the contribution from Vector([-1,1]) gets smaller and smaller with each k, which is also represented in the picture. In other situations we approach asymptotically.
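A small numerical sketch of this kind of stability behavior (the matrix below is a hypothetical one built to have eigenvalues 1 and 0.7 with eigenvectors [2,1] and [-1,1]):

```python
# A = P.D.P^(-1) with D = diag(1, 0.7) and eigenvector columns [2,1], [-1,1]
# works out to the matrix below (sanity check: A.[2,1] = [2,1], A.[-1,1] = 0.7*[-1,1]).
A = [[0.9, 0.2],
     [0.1, 0.8]]

x = [1.0, 1.0]  # a starting population in the first quadrant (so a1 != 0)
for _ in range(100):
    x = [A[0][0] * x[0] + A[0][1] * x[1],
         A[1][0] * x[0] + A[1][1] * x[1]]

# The (0.7)^k contribution dies off, so the state stabilizes on the line
# y = (1/2)x spanned by the eigenvector [2,1] for lambda = 1.
print(x[0] / x[1])  # approximately 2.0, the 2:1 population ratio
```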
• Fractions in Maple versus errors with decimals in Maple, or errors in other Maple outputs (i.e., the importance of critical/by-hand reasoning along with Maple).
• Maple output of eigenvectors giving one basis representative for each line, or basis representatives for R2, or Maple outputting a column of 0s as an eigenvector (like for a shear matrix), which tells us that the eigenvectors will not form a basis for R2, because there won't be 2 linearly independent ones.
• Some Maple commands. Here are some Maple commands you should be pretty familiar with by now for this test (i.e., I will at times show a command, with or without its output):
> with(LinearAlgebra): with(plots):
> A:=Matrix([[-1,2,1,-1],[2,4,-7,-8],[4,7,-3,3]]);
> ReducedRowEchelonForm(A);
> GaussianElimination(A);
(only for augmented matrices with unknown variables like k or a, b, c in the augmented matrix)
> Transpose(A);
> ConditionNumber(A);
(only for square matrices)
> Determinant(A);
> Eigenvalues(A);
> Eigenvectors(A);
> evalf(Eigenvectors(A));
> Vector([1,2,3]);
> B:=MatrixInverse(A);
> A.B;
> A+B;
> B-A;
> 3*A;
> A^3;
> evalf(M);
> spacecurve({[4*t,7*t,3*t,t=0..1],[-1*t,2*t,6*t,t=0..1]},color=red, thickness=2);
plot vectors as line segments in R3 (columns of matrices) to show whether the columns are in the same plane, etc.
> implicitplot({2*x+4*y-2,5*x-3*y-1}, x=-1..1, y=-1..1);
> display(a,b,c);
> implicitplot3d({x+2*y+3*z-3,2*x-y-4*z-1,x+y+z-2},x=-4..4,y=-4..4,z=-4..4);
plot equations of planes in R^3 (rows of augmented matrices) to look at the geometry of the intersection of the rows (ie 3 planes intersect in a point, a line, a plane, or no common points)