Linear Algebra
Linear Algebra (Elementary)
Vectors
Vectors are usually represented as a column of numbers that encodes a direction and a magnitude.
The magnitude of a vector can be determined by summing the squares of each entry in the column and taking the square root of the result: $|\vec{v}| = \sqrt{v_1^2 + v_2^2 + \dots + v_n^2}$.
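A quick numerical sketch of this (assuming NumPy is available; the vector values are just an example):

```python
import numpy as np

# A 3D vector written as a column of numbers (values chosen arbitrarily).
v = np.array([3.0, 4.0, 12.0])

# Magnitude: sum the squares of each entry, then take the square root.
magnitude = np.sqrt(np.sum(v**2))

print(magnitude)             # 13.0
print(np.linalg.norm(v))     # NumPy's built-in norm gives the same answer
```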
Dot Product
The dot product of two vectors is $\vec{a} \cdot \vec{b} = a_1 b_1 + a_2 b_2 + a_3 b_3$,
and, equivalently, $\vec{a} \cdot \vec{b} = |\vec{a}|\,|\vec{b}|\cos\theta$, where $\theta$ is the angle between the two vectors.
The two forms can be linked by applying the law of cosines to the triangle with sides $\vec{a}$, $\vec{b}$ and $\vec{a} - \vec{b}$.
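A small sketch (with example vectors whose angle is known) checking that the component form and the $|\vec{a}|\,|\vec{b}|\cos\theta$ form agree:

```python
import numpy as np

# Two 2D example vectors whose angle is known to be 45 degrees.
a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0])

# Component form: multiply matching entries and sum.
dot_components = np.sum(a * b)                       # 1*1 + 0*1 = 1.0

# Angle form: |a| |b| cos(theta) with theta = 45 degrees.
dot_angle = np.linalg.norm(a) * np.linalg.norm(b) * np.cos(np.pi / 4)

print(dot_components, dot_angle)                     # 1.0  ~1.0
```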
Cross Product
The cross product of two vectors creates another vector that is perpendicular to both of them: $\vec{a} \times \vec{b} = |\vec{a}|\,|\vec{b}|\sin\theta\,\hat{n}$,
where $\hat{n}$ is the unit vector given by the right-hand rule:
your fingers curl from vector $\vec{a}$ to $\vec{b}$, while your thumb represents the direction of $\vec{a} \times \vec{b}$.
Thus, swapping the order flips the thumb, so $\vec{b} \times \vec{a} = -(\vec{a} \times \vec{b})$. In component form, $\vec{a} \times \vec{b} = (a_2 b_3 - a_3 b_2,\; a_3 b_1 - a_1 b_3,\; a_1 b_2 - a_2 b_1)$.
The determinant trick for finding the cross product (a code sketch follows this list): write $\vec{a}$ and $\vec{b}$ side by side as two columns, then
- Cover up the first row.
- Cross-multiply the remaining entries diagonally and subtract. (First element)
- Cover up the second row.
- Cross-multiply diagonally and subtract, then multiply by $-1$. (Second element)
- Cover up the third row.
- Repeat step 2. (Third element)
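A minimal sketch of the cover-up trick, checked against NumPy's `np.cross` (the vectors are arbitrary examples):

```python
import numpy as np

def cross_by_coverup(a, b):
    """Cross product via the trick: cover a row, cross-multiply what is left."""
    return np.array([
        a[1] * b[2] - a[2] * b[1],        # cover row 1
        -(a[0] * b[2] - a[2] * b[0]),     # cover row 2, then multiply by -1
        a[0] * b[1] - a[1] * b[0],        # cover row 3
    ])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

print(cross_by_coverup(a, b))    # [-3.  6. -3.]
print(np.cross(a, b))            # [-3.  6. -3.]
```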
Area bounded by Vectors
The area of the parallelogram spanned by two vectors can be deduced. Let $\theta$ be the angle between $\vec{a}$ and $\vec{b}$; taking $\vec{a}$ as the base, the perpendicular height is $|\vec{b}|\sin\theta$.
Thus the area is $|\vec{a}|\,|\vec{b}|\sin\theta = |\vec{a} \times \vec{b}|$.
Thus the area of the parallelogram can be determined by the magnitude of the cross product of the two vectors.
Sample qn: Prove that the volume of the cuboid (more generally, the parallelepiped) with edge vectors $\vec{a}$, $\vec{b}$, $\vec{c}$ is $|\vec{a} \cdot (\vec{b} \times \vec{c})|$.
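A numerical sketch of both claims with arbitrary example vectors: the parallelogram area as $|\vec{a} \times \vec{b}|$ and the parallelepiped volume as $|\vec{a} \cdot (\vec{b} \times \vec{c})|$:

```python
import numpy as np

a = np.array([2.0, 0.0, 0.0])
b = np.array([1.0, 3.0, 0.0])
c = np.array([0.0, 0.0, 4.0])

# Parallelogram spanned by a and b: base |a| = 2, perpendicular height 3,
# so the area should be 6.
print(np.linalg.norm(np.cross(a, b)))     # 6.0

# Parallelepiped spanned by a, b, c: base area 6, height 4, so volume 24.
print(abs(np.dot(a, np.cross(b, c))))     # 24.0
```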
Points, Lines, Planes
Point
A point is, self-evidently, just a point in 3D space; we describe it by its position vector from the origin.
Lines
Starting from what we know best, the Cartesian equation $y = mx + c$:
A line in a vector space can be described as $\vec{r} = \vec{a} + \lambda\vec{d}$, where $\vec{a}$ is the position vector of a point on the line, $\vec{d}$ is a direction vector (playing the role of the gradient $m$), and $\lambda$ is a scalar parameter.
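A tiny sketch of the vector form, generating a few points on an example line by varying the parameter:

```python
import numpy as np

a = np.array([1.0, 0.0, 2.0])    # a point on the line
d = np.array([2.0, 1.0, -1.0])   # direction vector

# Each value of lambda gives one point r = a + lambda * d on the line.
for lam in [0.0, 1.0, 2.0]:
    print(lam, a + lam * d)
```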
Planes
Starting from what we know: the equation of a plane in Cartesian form is $ax + by + cz = d$.
Using the properties of the dot product, it can be rewritten as $\vec{r} \cdot \vec{n} = d$,
or $(\vec{r} - \vec{p}) \cdot \vec{n} = 0$,
where $\vec{r} = (x, y, z)$ is the position vector of a general point on the plane, $\vec{n} = (a, b, c)$ is a vector normal to the plane, and $\vec{p}$ is the position vector of a known point on the plane.
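A small sketch, assuming an example normal vector and point, showing that every point $\vec{r}$ on the plane satisfies $\vec{r} \cdot \vec{n} = d$:

```python
import numpy as np

n = np.array([1.0, 2.0, -2.0])    # normal vector (a, b, c)
p = np.array([3.0, 0.0, 1.0])     # a known point on the plane

# d is fixed by the known point: d = p . n
d = np.dot(p, n)                  # 1.0

# Moving from p in any direction perpendicular to n stays on the plane.
in_plane = np.array([2.0, -1.0, 0.0])   # its dot product with n is 0
r = p + 5.0 * in_plane

print(np.dot(r, n), d)            # both 1.0, so r is still on the plane
```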
Intersection of Planes
Given three planes, if their normal vectors are linearly independent (equivalently, the matrix with the three normals as rows has a non-zero determinant), then the planes must intersect at a unique point.
However, if this is not satisfied (the determinant is zero), the planes may meet in a line or have no common point at all.
You then have to find whether the three plane equations are consistent with one another to tell which case you are in.
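A sketch of the criterion with three example planes written as $N\vec{r} = \vec{d}$, where the rows of $N$ are the normals: a non-zero determinant means a unique intersection point, which `np.linalg.solve` then finds:

```python
import numpy as np

# Rows of N are the plane normals; d holds the constants.
N = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
d = np.array([3.0, 5.0, 4.0])

if abs(np.linalg.det(N)) > 1e-12:
    # Normals are linearly independent: exactly one intersection point.
    print(np.linalg.solve(N, d))   # [1. 2. 3.]
else:
    # Normals are linearly dependent: a line of intersection or no common
    # point; the equations' consistency has to be checked separately.
    print("no unique intersection point")
```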
Matrices
Recall how a function in algebra transforms $x$ into $y$, as in $y = f(x)$?
In linear algebra, instead of $x$ and $y$, which are scalar values, we transform one vector into another vector.
However, to make the distinction clearer, mathematicians use $A$ instead of $f$.
We need special tools to work with vectors, and this is where matrices come in. A matrix encodes a transformation on a vector, similar to how a function $f$ encodes a transformation of $x$ to $y$.
e.g. $\vec{y} = A\vec{x}$,
where $A$ is a matrix and $\vec{x}$, $\vec{y}$ are vectors.
Note that the dimension of the vector $\vec{x}$ has to be the same as the number of columns of the matrix $A$.
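A minimal example of $\vec{y} = A\vec{x}$ in NumPy (arbitrary values); note that the dimension of $\vec{x}$ matches the number of columns of $A$:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])   # 2 rows, 3 columns
x = np.array([1.0, 2.0, 3.0])     # dimension 3 = number of columns of A

y = A @ x                         # the transformed vector
print(y)                          # [ 5. 11.]
```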
Matrix Multiplication (How does $A$ transform $\vec{x}$ into $\vec{y}$?)
Matrix multiplication works like this (see the sketch after these steps):
- Take the $n$th row of numbers in $A$.
- Pivot it clockwise by 90 degrees so it lines up with the $m$th column of the thing being multiplied (for $A\vec{x}$, that is the single column $\vec{x}$).
- Multiply each number with the entry it now sits next to.
- Add up all the products in the column.
- The resulting value sits in the $n$th row and $m$th column of the answer.
Thus $\vec{y} = A\vec{x}$, with $y_n = \sum_k A_{nk}\, x_k$.
Bravo!! We have transformed $\vec{x}$ into $\vec{y}$.
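A sketch that follows the row-by-row recipe above by hand and compares it with NumPy's built-in product (the matrices are arbitrary examples):

```python
import numpy as np

def multiply(A, B):
    """Matrix product by the recipe: row n of A against column m of B."""
    n_rows, inner = A.shape
    inner_b, m_cols = B.shape
    assert inner == inner_b, "columns of A must match rows of B"
    result = np.zeros((n_rows, m_cols))
    for n in range(n_rows):
        for m in range(m_cols):
            # Line up row n of A with column m of B, multiply pairwise, sum.
            result[n, m] = np.sum(A[n, :] * B[:, m])
    return result

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([[5.0],
              [6.0]])             # a column vector is just a 2x1 matrix

print(multiply(A, x))             # [[17.] [39.]]
print(A @ x)                      # same result
```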
System of Linear Equations
A matrix also encodes a system of linear equations. Suppose there are 2 equations, $p a + q b = e$ and $r a + s b = f$, and we want to solve for $a$ and $b$.
A matrix representation of the above is $\begin{pmatrix} p & q \\ r & s \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} e \\ f \end{pmatrix}$.
look, aren't they equivalent??
Solving for the unknowns is as easy as $\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} p & q \\ r & s \end{pmatrix}^{-1} \begin{pmatrix} e \\ f \end{pmatrix}$,
where the inverse of the matrix is essentially the reverse of its transformation, similar to how $f^{-1}(f(x)) = x$ undoes a function.
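A sketch of solving a 2×2 example system via the inverse; `np.linalg.solve` is the numerically preferred route, but the inverse makes the analogy with $f^{-1}$ explicit:

```python
import numpy as np

# Example system:  2a + 3b = 8
#                   a -  b = -1
M = np.array([[2.0, 3.0],
              [1.0, -1.0]])
rhs = np.array([8.0, -1.0])

# "Undo" the transformation M by applying its inverse.
print(np.linalg.inv(M) @ rhs)    # [1. 2.]  ->  a = 1, b = 2

# Equivalent, and numerically preferable:
print(np.linalg.solve(M, rhs))   # [1. 2.]
```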
Transpose
Transpose means to flip a vector or a matrix around its main diagonal. #Transpose
Thus a column vector becomes a row vector, and vice versa.
In the case of matrices, the rows of $A$ become the columns of $A^T$: $(A^T)_{ij} = A_{ji}$.
Another property is that $(AB)^T = B^T A^T$.
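A quick check of the transpose rules with arbitrary example matrices, including $(AB)^T = B^T A^T$:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [5.0, 2.0]])

print(A.T)                                  # rows have become columns
print(np.allclose((A @ B).T, B.T @ A.T))    # True
print(np.allclose(A.T.T, A))                # transposing twice gives A back
```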
Determinant
ps. This was copied from Reddit
The idea of the determinant is to get an indication of whether a system of equations has exactly one solution, or not. If the determinant is zero, then the system has no solutions, or many solutions. Otherwise, it has exactly one.
The reason it works out this way is that the determinant formula is set up to give zero if the columns are not linearly independent. If the columns are not linearly independent, that means we don't have enough information to find a unique solution.
Linear dependence between columns means that at least one of the columns is a linear combination of the others. This is a formal way of saying that one of the columns doesn't provide any new information because it's derived by adding together other columns (possibly after multiplying them by constants first.)
PS. As others have said, there's a geometric interpretation too, where transforming a system using a matrix with linearly dependent columns/rows will cause at least one dimension of the space to collapse. The geometric interpretation has the advantage that it attaches some meaning to the size of the determinant when it's not zero: it's the ratio of the change in area (or volume) caused by a transformation. But I still think the system-of-equations interpretation is the clearest, personally.
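A sketch of both interpretations using example matrices: linearly dependent columns give a zero determinant, while a non-zero determinant is the factor by which the transformation scales areas:

```python
import numpy as np

# The second column is 2x the first, so the columns are linearly dependent.
dependent = np.array([[1.0, 2.0],
                      [3.0, 6.0]])
print(np.linalg.det(dependent))    # ~0.0  ->  no unique solution

# Independent columns: the unit square is mapped to a parallelogram whose
# area is |det| times larger.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
print(np.linalg.det(A))            # 6.0  ->  areas are scaled by 6
```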
Linear Algebra (Modern)
Infinite Vectors and Functional Spaces
"The only thing limiting us in Math is our imagination" ~ Joshua
Functions are vectors. What? Yes, it's true. Every function is an infinite vector, and these infinite vectors live within something called a Functional Space.
Introduction to Infinite-Dimensional Vectors (Functions)
Take a function $f(x)$ and sample it at more and more points $x_1, x_2, x_3, \dots$: the column of values $(f(x_1), f(x_2), f(x_3), \dots)$ gets longer and longer, and in the limit the function itself behaves like a vector with infinitely many components.
Orthogonal Basis
Suppose we want to build arbitrary functions out of a few simple ones, just as any vector in the plane can be built from basis vectors.
Recall that in Euclidean space, a linear combination of two orthogonal (perpendicular) vectors, such as $\hat{i}$ and $\hat{j}$, can reach every vector in the plane, and orthogonality means their dot product is zero.
Thus we need to find two functions whose dot product is 0.
When dealing with finite vectors, the dot product is defined as $\vec{a} \cdot \vec{b} = \sum_i a_i b_i$: each pair of corresponding elements in the vectors is multiplied, and the products are summed together.
Similarly, in the context of infinite vectors (functions), it is defined as an integral over the interval of interest: $\langle f, g \rangle = \int f(x)\, g(x)\, dx$.
Fourier discovered that the two functions $\sin(nx)$ and $\cos(mx)$ are orthogonal over a full period, e.g. $\int_{-\pi}^{\pi} \sin(nx)\cos(mx)\, dx = 0$.
Thus, sines and cosines form an orthogonal basis, and periodic functions can be written as linear combinations of them; this is the idea behind the Fourier series.
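A numerical sketch of the function "dot product" as an integral, checking that $\sin(nx)$ and $\cos(mx)$ are orthogonal over $[-\pi, \pi]$ (the particular n and m are arbitrary):

```python
import numpy as np

def inner_product(f, g, a=-np.pi, b=np.pi, samples=100_001):
    """Approximate the integral of f(x) * g(x) over [a, b]."""
    x = np.linspace(a, b, samples)
    return np.trapz(f(x) * g(x), x)

n, m = 2, 3
print(inner_product(lambda x: np.sin(n * x), lambda x: np.cos(m * x)))  # ~0
print(inner_product(np.sin, np.cos))                                    # ~0
print(inner_product(np.sin, np.sin))                                    # ~pi, not zero
```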