# Fundamentals

## Basis

A basis is a set of linearly independent vectors in a vector space such that every vector in the space is a linear combination of them. See the Wikipedia article on bases for details.

## Eigenvector & Eigenvalue

Intuitively, eigenvectors and eigenvalues describe what a transformation does to a vector space. When a transformation is applied, most vectors are knocked off the line they span, but some remain on their own span – these are the eigenvectors. An eigenvector stays on its own span after the transformation, though it may be scaled – by a factor called its eigenvalue. An interesting fact is that any vector lying on the same span as an eigenvector is itself an eigenvector. Therefore **each eigenvalue comes with infinitely many eigenvectors**. One degenerate case deserves a note. Consider the 3D transformation matrix \(A = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}\):

\(A\) squashes everything into *null*: \(A\vec{v} = \vec{0} = 0\vec{v}\) for every \(\vec{v}\), so every nonzero vector is an eigenvector with eigenvalue \(0\). The zero vector \(\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}\) itself is excluded by convention – it lies on every span, so it is never counted as an eigenvector.
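A quick numerical check of the "scaling along the span" idea, using numpy with an illustrative diagonal matrix (my choice, not from the notes):

```python
import numpy as np

# Illustrative 2x2 transformation (my choice, not from the notes)
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
v = eigenvectors[:, 0]   # an eigenvector of A
lam = eigenvalues[0]     # its eigenvalue

# A only scales v by its eigenvalue
assert np.allclose(A @ v, lam * v)

# Any nonzero scalar multiple of v lies on the same span
# and is therefore also an eigenvector, with the same eigenvalue
w = 5.0 * v
assert np.allclose(A @ w, lam * w)
```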

### Eigenvector in 3D rotations

An eigenvector of a 3D rotation is the **axis of the rotation**. And because a rotation doesn’t change lengths, the **eigenvalue must be 1**.
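A sketch with numpy, using a rotation about the z-axis by an arbitrary angle (the angle is my assumption):

```python
import numpy as np

theta = 0.7  # arbitrary rotation angle (my assumption)

# Rotation about the z-axis
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

axis = np.array([0.0, 0.0, 1.0])  # the rotation axis

# The axis is an eigenvector with eigenvalue 1: the rotation leaves it unchanged
assert np.allclose(Rz @ axis, 1.0 * axis)
```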

### Eigenvector in 2D rotations

A 2D rotation (by any angle that is not a multiple of \(\pi\)) has no eigenvector: every vector is knocked off its span. No eigenvector implies no real-valued eigenvalues – they are complex instead; for a rotation by \(\theta\) they are \(\cos\theta \pm i\sin\theta\).
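This can be checked with numpy – `np.linalg.eig` on a 2D rotation matrix returns complex eigenvalues (the angle below is my choice):

```python
import numpy as np

theta = 0.7  # any angle that is not a multiple of pi (my choice)

R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

eigenvalues, _ = np.linalg.eig(R)

# The eigenvalues are cos(theta) +/- i*sin(theta): complex, not real
assert np.allclose(np.abs(eigenvalues), 1.0)
assert not np.allclose(eigenvalues.imag, 0.0)
```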

### Single eigenvalue with multiple eigenvectors

The converse also happens: multiple eigenvectors but only one single eigenvalue. Consider the transformation \(A = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}\):

\(A\) scales every vector by 2 and only 2, so every nonzero vector is an eigenvector, yet the only eigenvalue is 2.
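A minimal check with numpy (the test vectors below are arbitrary choices):

```python
import numpy as np

A = 2.0 * np.eye(2)  # scales every vector by 2 and only 2

# Every nonzero vector is an eigenvector, yet the only eigenvalue is 2
for v in (np.array([1.0, 0.0]),
          np.array([1.0, 1.0]),
          np.array([-3.0, 5.0])):
    assert np.allclose(A @ v, 2.0 * v)
```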

### Calculation

The defining equation says the matrix \(A\) changes only the scale of the vector \(\vec{v}\), by a factor of \(\lambda\): \(A\vec{v} = \lambda\vec{v}\). We can rewrite the right-hand side with the identity matrix as \(\lambda I \vec{v}\).

To solve for the eigenvalues, take the right-hand side to the other side: \((A - \lambda I)\vec{v} = \vec{0}\)

Remember the squashification? \((A - \lambda I)\) is squashing the nonzero vector \(\vec{v}\) down to the zero vector, which is only possible if it collapses space into a lower dimension. This implies the following: \(\det(A - \lambda I) = 0\). Solving this equation gives the eigenvalues \(\lambda\); substituting each one back into \((A - \lambda I)\vec{v} = \vec{0}\) gives its eigenvectors.
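Each eigenvalue makes \((A - \lambda I)\) singular, i.e. \(\det(A - \lambda I) = 0\). A quick check with numpy (the matrix is an illustrative choice, not from the notes):

```python
import numpy as np

# Illustrative matrix (my choice, not from the notes)
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

eigenvalues, _ = np.linalg.eig(A)

# Each eigenvalue makes (A - lambda*I) singular: det(A - lambda*I) = 0
for lam in eigenvalues:
    assert np.isclose(np.linalg.det(A - lam * np.eye(2)), 0.0)
```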

### Eigenbasis

Consider a 2D vector space. If both basis vectors are eigenvectors, then the transformation matrix is diagonal. Here’s the step-by-step:

A typical set of eigenvectors in 2D – the basis vectors themselves: \(\begin{bmatrix} 1 \\ 0 \end{bmatrix}\) and \(\begin{bmatrix} 0 \\ 1 \end{bmatrix}\)

A transformation matrix that scales them by \(\lambda_1\) and \(\lambda_2\): \(A = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}\)

The columns of the transformation matrix happen to be scaled copies of the eigenvectors – **the basis vectors as well** – with the diagonal values being their eigenvalues. Note that the matrix is diagonal. Diagonal matrices have a handy property – their power is just the power of the diagonal elements: \(A^n = \begin{bmatrix} \lambda_1^n & 0 \\ 0 & \lambda_2^n \end{bmatrix}\)
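The diagonal-power property is easy to confirm with numpy (the diagonal values are my choice):

```python
import numpy as np

D = np.array([[3.0, 0.0],
              [0.0, 2.0]])

# The power of a diagonal matrix is the element-wise power of its diagonal
D5 = np.linalg.matrix_power(D, 5)
assert np.allclose(D5, np.diag([3.0 ** 5, 2.0 ** 5]))
```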

### Eigenbasis for easier power

Consider a non-diagonal transformation \(A\).

\(A\) is not diagonal, so its powers are expensive to calculate. Let’s change the basis to the eigenvectors **in order to make** \(A\) **diagonal**. First, find the eigenvectors of \(A\).

From the eigenvectors we build the **“change of basis matrix”** \(P\), whose columns are the eigenvectors.

Now change the basis: \(D = P^{-1} A P\) is diagonal, with the eigenvalues of \(A\) on its diagonal. Powers become cheap again, since \(A^n = P D^n P^{-1}\) and \(D^n\) is just an element-wise power of the diagonal.
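The whole diagonalization recipe sketched in numpy, with an illustrative non-diagonal matrix (my choice, not necessarily the one the notes had in mind):

```python
import numpy as np

# Illustrative non-diagonal matrix (my choice)
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# Columns of P are the eigenvectors of A: the change of basis matrix
eigenvalues, P = np.linalg.eig(A)

# In the eigenbasis, A becomes diagonal: D = P^-1 A P
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(eigenvalues))

# Cheap powers: A^n = P D^n P^-1, and D^n is an element-wise power
n = 10
An = P @ np.diag(eigenvalues ** n) @ np.linalg.inv(P)
assert np.allclose(An, np.linalg.matrix_power(A, n))
```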

## Hessian matrix

The Hessian matrix is a square matrix of the second-order partial derivatives of a scalar function. It is of immense use in linear algebra as well as for determining points of local maxima or minima. [1] For a function \(f(x,y)\) it is \(\nabla^2 f = \begin{bmatrix} f_{xx} & f_{xy} \\ f_{yx} & f_{yy} \end{bmatrix}\)

### Conditions for minima, maxima, and saddle points

The Hessian of a function is denoted by \(\nabla^2 f(x,y)\), where \(f\) is a twice-differentiable function. If \((x_0,y_0)\) is one of its stationary points, then:

- If \(\nabla^2 f(x_0,y_0)\) is positive definite, \((x_0,y_0)\) is a point of local minimum.
- If \(\nabla^2 f(x_0,y_0)\) is negative definite, \((x_0,y_0)\) is a point of local maximum.
- If \(\nabla^2 f(x_0,y_0)\) is indefinite (neither positive nor negative definite), \((x_0,y_0)\) is a saddle point.
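A sketch of these tests with numpy, deciding definiteness from the signs of the symmetric Hessian’s eigenvalues; the example functions are my own choices:

```python
import numpy as np

# Example functions are my own choices.
# f(x, y) = x**2 + y**2 has a stationary point at (0, 0);
# its Hessian there is constant:
H_min = np.array([[2.0, 0.0],
                  [0.0, 2.0]])

# A symmetric matrix is positive definite iff all its eigenvalues are > 0
eigs_min = np.linalg.eigvalsh(H_min)
assert np.all(eigs_min > 0)  # positive definite -> local minimum at (0, 0)

# f(x, y) = x**2 - y**2: Hessian has mixed-sign eigenvalues
H_saddle = np.diag([2.0, -2.0])
eigs_saddle = np.linalg.eigvalsh(H_saddle)
assert eigs_saddle.min() < 0 < eigs_saddle.max()  # indefinite -> saddle point
```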

Reference

[1] https://brilliant.org/wiki/hessian-matrix/