Linear Algebra

Linear Algebra (Elementary)

Vectors

Vectors are usually represented as a column of numbers that encodes a direction and a magnitude.

$$\mathbf{v} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$$

The magnitude of a vector can be determined by summing the square of each component and taking the square root of the result.

$$|\mathbf{v}| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} = \sqrt{\sum_{i=1}^{n} x_i^2}$$
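A quick NumPy sanity check of the magnitude formula (the 3-4-12 vector here is just an example, not from the notes):

```python
import numpy as np

v = np.array([3.0, 4.0, 12.0])

# Sum the square of each component, then take the square root
magnitude = np.sqrt(np.sum(v**2))

# NumPy's built-in norm computes exactly the same quantity
assert np.isclose(magnitude, np.linalg.norm(v))
```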

Dot Product

The dot product of two vectors is

$$\mathbf{A} \cdot \mathbf{B} = |\mathbf{A}|\,|\mathbf{B}|\cos\theta$$

And

$$\mathbf{A} \cdot \mathbf{B} = \sum_{i=1}^{n} a_i b_i$$
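Both definitions should agree. A small NumPy check (the vectors and the 45° angle between them are an illustrative example of my own):

```python
import numpy as np

A = np.array([1.0, 0.0, 0.0])
B = np.array([1.0, 1.0, 0.0])

# Component form: sum of a_i * b_i
dot_sum = np.sum(A * B)

# Geometric form: |A||B|cos(theta); the angle between A and B here is 45 degrees
theta = np.pi / 4
dot_geo = np.linalg.norm(A) * np.linalg.norm(B) * np.cos(theta)

assert np.isclose(dot_sum, dot_geo)
```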



Cross Product

The cross product of two vectors creates another vector that is perpendicular to both vectors.

$$\mathbf{A} \times \mathbf{B} = |\mathbf{A}|\,|\mathbf{B}|\sin\theta\; \hat{n}$$

Where $\hat{n}$ is a unit vector perpendicular to $\mathbf{A}$ and $\mathbf{B}$. The direction of $\hat{n}$ can be determined using the right hand grip rule.

![[Pasted image 20250423155913.png|centre|200]]

Your fingers curl from vector $\mathbf{a}$ to $\mathbf{b}$, while your thumb points in the direction of $\mathbf{a} \times \mathbf{b}$.

Thus

$$\mathbf{A} \times \mathbf{B} = -\,\mathbf{B} \times \mathbf{A}$$

The determinant trick for finding the cross product:

$$\begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} \times \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix} = \begin{pmatrix} a_2 b_3 - b_2 a_3 \\ -(a_1 b_3 - b_1 a_3) \\ a_1 b_2 - b_1 a_2 \end{pmatrix}$$
  1. Cover up the first row.
  2. Cross-multiply the remaining two rows diagonally and subtract. (First element)
  3. Cover up the second row.
  4. Cross-multiply diagonally and subtract, then multiply by $-1$. (Second element)
  5. Cover up the third row.
  6. Repeat step 2. (Third element)
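The steps above can be written out directly in code and compared against NumPy's `np.cross` (an illustrative check, not part of the original notes):

```python
import numpy as np

def cross(a, b):
    # The determinant trick, written out component by component
    return np.array([
        a[1] * b[2] - b[1] * a[2],     # cover row 1, cross-multiply, subtract
        -(a[0] * b[2] - b[0] * a[2]),  # cover row 2, same, then negate
        a[0] * b[1] - b[0] * a[1],     # cover row 3, same as row 1
    ])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

assert np.allclose(cross(a, b), np.cross(a, b))
# The result is perpendicular to both inputs (both dot products are zero)
assert np.isclose(np.dot(cross(a, b), a), 0)
assert np.isclose(np.dot(cross(a, b), b), 0)
```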

Area bounded by Vectors

![[Pasted image 20250423155837.png|centre|300]]

The area of the parallelogram can be deduced as follows. Let $\mathbf{u}$ be the base and $h$ be the height. The area is then $|\mathbf{u}|h$, where $h$ can be expressed as

$$h = |\mathbf{v}|\sin\theta$$

Thus

$$|\mathbf{u}|h = |\mathbf{u}|\,|\mathbf{v}|\sin\theta = |\mathbf{u} \times \mathbf{v}|$$

Thus the area of the parallelogram can be determined by the magnitude of the cross product of $\mathbf{u}$ and $\mathbf{v}$.
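A small numerical check of this result, with the base along the x-axis so the height is easy to read off (example vectors of my own choosing):

```python
import numpy as np

u = np.array([3.0, 0.0, 0.0])  # base of length 3 along x
v = np.array([1.0, 2.0, 0.0])  # height above the base is 2

area = np.linalg.norm(np.cross(u, v))
assert np.isclose(area, 3.0 * 2.0)  # base * height
```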

Sample qn: Prove that the volume of a cuboid (and, more generally, of any parallelepiped) is

$$\text{Volume} = |\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})|$$
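Not a proof, but a quick numerical sanity check of the scalar triple product on an axis-aligned 2 × 3 × 4 cuboid (my own example):

```python
import numpy as np

# Edges of a 2 x 3 x 4 cuboid along the coordinate axes
a = np.array([2.0, 0.0, 0.0])
b = np.array([0.0, 3.0, 0.0])
c = np.array([0.0, 0.0, 4.0])

volume = abs(np.dot(a, np.cross(b, c)))
assert np.isclose(volume, 2.0 * 3.0 * 4.0)
```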

Points, Lines, Planes

Point

A point is a single location in 3D space, usually described by its position vector.

Lines

Starting from what we know best

$$y = mx + b$$

A line in a vector space can be described as

$$\mathbf{r} = \mathbf{a} + t\mathbf{b}$$

where $\mathbf{a}$ is the position vector of a point on the line, $\mathbf{b}$ is the direction vector, and $t$ is a scalar parameter.

Planes

Starting from what we know, the equation of a plane in Cartesian form is

$$ax + by + cz = D$$

Using the properties of dot product, it can be rewritten as

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} \cdot \begin{pmatrix} a \\ b \\ c \end{pmatrix} = D$$

or

$$\mathbf{r} \cdot \mathbf{n} = D$$

where $\mathbf{n}$ is the normal vector of the plane and $\mathbf{r}$ is the position vector of any point on the plane.
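A point lies on the plane exactly when its position vector satisfies this equation; a small check with an example plane $x + 2y + 3z = 6$ of my own:

```python
import numpy as np

n = np.array([1.0, 2.0, 3.0])  # normal vector of the plane x + 2y + 3z = 6
D = 6.0

r_on = np.array([1.0, 1.0, 1.0])   # 1 + 2 + 3 = 6, so this point is on the plane
r_off = np.array([0.0, 0.0, 0.0])  # 0 != 6, so the origin is not

assert np.isclose(np.dot(r_on, n), D)
assert not np.isclose(np.dot(r_off, n), D)
```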

Intersection of Planes

Given three planes, if their normal vectors satisfy this criterion then the planes must intersect at a unique point.

$$\mathbf{n}_1 \cdot (\mathbf{n}_2 \times \mathbf{n}_3) \neq 0$$

However, if this is satisfied

$$\mathbf{n}_1 \cdot (\mathbf{n}_2 \times \mathbf{n}_3) = 0$$

![[Pasted image 20250423155647.png|centre|300]]
then you have to find $D$, often by substituting $x = 0$ into each equation, which determines whether the system has infinitely many solutions (the planes meet in a common line) or forms a triangle with no solutions, as shown above.
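The triple-product criterion can be checked numerically; the planes below are examples of my own ($x=0$, $y=0$, $z=0$ versus a set containing two parallel planes):

```python
import numpy as np

# Normals of x = 0, y = 0, z = 0: linearly independent,
# so these three planes meet at a unique point (the origin)
n1, n2, n3 = np.eye(3)
assert not np.isclose(np.dot(n1, np.cross(n2, n3)), 0)

# Make two of the normals parallel: the triple product vanishes,
# so there is no longer a unique intersection point
m1 = np.array([1.0, 0.0, 0.0])
m2 = np.array([2.0, 0.0, 0.0])  # parallel to m1
m3 = np.array([0.0, 1.0, 0.0])
assert np.isclose(np.dot(m1, np.cross(m2, m3)), 0)
```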

Matrices

Recall how a function in algebra transforms $x$ to $y$?

$$f(x) = y$$

In linear algebra, instead of $x$ and $y$, which are scalar values, we transform a vector into another vector.

$$f(\mathbf{x}) = \mathbf{y}$$

However, to make the distinction clearer, mathematicians use $A$ instead of $f$.

$$A\mathbf{x} = \mathbf{b}$$

We need special tools to work with vectors, and this is where matrices come in. A matrix encodes a transformation on a vector, similar to how a function $f$ encodes a transformation of $x$ to $y$.

eg.

$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \mathbf{x} = \mathbf{y}$$

where $A$ is a matrix and $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$

$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}$$

Note that the dimension of the vector has to match the number of columns of the matrix $A$.


Matrix Multiplication (How does it transform $\mathbf{x}$ into $\mathbf{y}$?)

Matrix multiplication

  1. Take the $n$th row of numbers in $A$.
  2. Rotate it 90 degrees clockwise so it lines up against the column vector $\mathbf{x}$.
  3. Multiply each number with the entry of $\mathbf{x}$ it sits next to.
  4. Add up all the products.
  5. The result is the value on the $n$th row of $\mathbf{y}$.

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} ax + by \\ cx + dy \end{pmatrix}$$

Thus

$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 + 2x_2 \\ 3x_1 + 4x_2 \end{pmatrix}$$

Bravo!! We have transformed $\mathbf{x}$ into a completely new vector $\mathbf{y}$!!
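The row-by-row recipe above matches NumPy's `@` operator (same matrix as in the text; the values 5 and 6 for $x_1$ and $x_2$ are my own example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([5.0, 6.0])  # example values for x1 and x2

# Each entry of y is (row of A) dotted with x
y = A @ x
assert np.allclose(y, [1*5 + 2*6, 3*5 + 4*6])  # [x1 + 2*x2, 3*x1 + 4*x2]
```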


System of Linear Equations

A matrix also encodes a system of linear equations. Suppose there are 2 equations as shown, and we want to solve for a and b.

$$\begin{cases} a + b = 20 \\ 2a + b = 23 \end{cases}$$

A matrix representation of the above is as follows

$$\begin{pmatrix} 1 & 1 \\ 2 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} a + b \\ 2a + b \end{pmatrix} = \begin{pmatrix} 20 \\ 23 \end{pmatrix}$$

Look, aren't they equivalent??

Solving for the coefficients is as easy as

$$\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 2 & 1 \end{pmatrix}^{-1} \begin{pmatrix} 20 \\ 23 \end{pmatrix}$$

Where the inverse of the matrix is essentially the reverse of its transformation, similar to how $f: x \to y$, thus $f^{-1}: y \to x$. Note that the inverse does not exist if the transformation is not one-to-one; for a square matrix, this happens exactly when its determinant is zero.
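The example system above can be solved this way in NumPy (in practice `np.linalg.solve` is preferred over forming the inverse explicitly):

```python
import numpy as np

# a + b = 20
# 2a + b = 23
M = np.array([[1.0, 1.0],
              [2.0, 1.0]])
rhs = np.array([20.0, 23.0])

a, b = np.linalg.solve(M, rhs)
assert np.isclose(a + b, 20.0) and np.isclose(2*a + b, 23.0)

# Same answer via the explicit inverse, as in the equation above
assert np.allclose(np.linalg.inv(M) @ rhs, [a, b])
```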


Transpose

Transpose means to flip a vector or a matrix around its diagonal.

$$\begin{pmatrix} x_0 \\ x_1 \\ x_2 \end{pmatrix}^T = \begin{pmatrix} x_0 & x_1 & x_2 \end{pmatrix}$$

Thus

$$\mathbf{v}^T \mathbf{v} = \mathbf{v} \cdot \mathbf{v}$$

In the case of matrices

$$\begin{pmatrix} a_{00} & a_{01} & a_{02} \\ a_{10} & a_{11} & a_{12} \end{pmatrix}^T = \begin{pmatrix} a_{00} & a_{10} \\ a_{01} & a_{11} \\ a_{02} & a_{12} \end{pmatrix}$$

Another property is that $A^T A$ always results in a square matrix. If $A$ transforms a vector from $m \times 1$ to $n \times 1$, then $A^T$ transforms it from $n \times 1$ back to $m \times 1$. Together, the two transformations take a vector from $m \times 1$ to $m \times 1$, which is exactly what a square ($m \times m$) matrix does.
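A quick shape check of the $A^T A$ claim, using a 2×3 example matrix of my own:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])  # 2x3: maps 3x1 vectors to 2x1 vectors

# Transposing swaps rows and columns
assert A.T.shape == (3, 2)
assert np.allclose(A.T[0], [1.0, 4.0])  # first row of A^T is first column of A

# A^T A maps 3x1 -> 2x1 -> 3x1, so it must be a 3x3 square matrix
assert (A.T @ A).shape == (3, 3)
```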


Determinant

PS. This was copied from Reddit.

The idea of the determinant is to get an indication of whether a system of equations has exactly one solution, or not. If the determinant is zero, then the system has no solutions, or many solutions. Otherwise, it has exactly one.

The reason it works out this way is that the determinant formula is set up to give zero if the columns are not linearly independent. If the columns are not linearly independent, that means we don't have enough information to find a unique solution.

Linear dependence between columns means that at least one of the columns is a linear combination of the others. This is a formal way of saying that one of the columns doesn't provide any new information because it's derived by adding together other columns (possibly after multiplying them by constants first.)

PS. As others have said, there's a geometric interpretation too, where transforming a system using a matrix with linearly dependent columns/rows will cause at least one dimension of the space to collapse. The geometric interpretation has the advantage that it attaches some meaning to the size of the determinant when it's not zero: it's the ratio of the change in area (or volume) caused by a transformation. But I still think the system-of-equations interpretation is the clearest, personally.
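The zero-determinant behaviour is easy to demonstrate (example matrices of my own; the second has a linearly dependent column):

```python
import numpy as np

# Independent columns: non-zero determinant, exactly one solution exists
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
assert not np.isclose(np.linalg.det(A), 0)

# Second column is twice the first (linearly dependent):
# the determinant collapses to zero
B = np.array([[1.0, 2.0],
              [3.0, 6.0]])
assert np.isclose(np.linalg.det(B), 0)
```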

Linear Algebra (Modern)

Infinite Vectors and Functional Spaces

"The only thing limiting us in Math is our imagination" ~ Joshua

Functions are vectors. What? Yes, it's true. Every function can be treated as an infinite-dimensional vector, and these vectors live within something called a function space.

Introduction to Infinite-dimensional Vectors (Functions)

Take a function $f(x)$. Suppose we sample all values of $f(x)$ for $0 \le x \le n$ and arrange them in a vector.

$$f(x) = \begin{pmatrix} f(0) \\ f(0.001) \\ \vdots \\ f(n) \end{pmatrix}$$

Orthogonal Basis

Suppose $f(x) = \sin x$. Since $\sin$ is an oscillatory function, the domain of $x$ that we are interested in is $x \in \{x \in \mathbb{R} \mid 0 \le x \le 2\pi\}$.

Recall that in Euclidean space, a linear combination of two orthogonal (perpendicular) vectors $\hat{i}$ and $\hat{j}$ can describe any point on the plane? Similarly, to describe any point/function within this function space, we want to find orthogonal vectors, or functions.

Thus we need to find two functions whose dot product is 0.

$$f(x) \cdot g(x) = 0$$

When dealing with finite vectors, the dot product is defined as follows: corresponding elements of the two vectors are multiplied and the products summed together.

$$\mathbf{A} \cdot \mathbf{B} = \sum_{i=1}^{n} a_i b_i$$

Similarly, in the context of infinite vectors, it is defined as

$$f(x) \cdot g(x) = \int f(x)\, g(x)\ dx$$

Fourier discovered that $\sin x$ and $\cos x$ are such a pair of functions, since

$$\int_0^{2\pi} \sin x \cos x\ dx = 0$$

Thus, $\sin$ and $\cos$ act as orthogonal basis vectors in function space. This is the idea behind Fourier series: a wide class of (periodic) functions can be expressed as a linear combination of $\sin$ and $\cos$ waves.
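The orthogonality integral can be checked numerically by approximating the "infinite dot product" with a Riemann sum (a rough sketch, not part of the original notes):

```python
import numpy as np

# Sample sin and cos densely on [0, 2*pi] and approximate the integral
x = np.linspace(0.0, 2.0 * np.pi, 100_001)
dx = x[1] - x[0]

inner = np.sum(np.sin(x) * np.cos(x)) * dx  # ~ integral of sin(x)cos(x) dx
assert abs(inner) < 1e-6  # numerically zero: the functions are orthogonal

# By contrast, sin is not orthogonal to itself
self_inner = np.sum(np.sin(x) ** 2) * dx  # ~ pi over a full period
assert abs(self_inner - np.pi) < 1e-3
```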