The main ideas from this information are:

- Properties of determinants, such as: the determinant of a non-zero skew-symmetric matrix of even order is a perfect square, and the determinant of an orthogonal matrix is plus or minus 1.
- The inverse of a 2×2 matrix can be found by going in the anticlockwise direction: swap the diagonal entries, negate the off-diagonal ones, and divide by the determinant.
- The rank of a matrix represents the number of independent vectors.
- Eigenvectors are vectors that stay on their own line after a transformation, and eigenvalues are the factors by which they scale or descale.
- Transformation matrices describe where the basis vectors land after a transformation.
- The echelon form of a matrix is obtained by performing row operations so that each successive row has more leading zeros.
- Rank gives the number of independent vectors, while nullity gives the number of independent (free) variables.
- Linearly dependent vectors can be created from the other vectors, while linearly independent vectors cannot.
- Orthogonal vectors have a dot product of zero.

Good afternoon, good evening, whatsoever. Now I start with the mathematics.

Properties of determinants: the determinant of a skew-symmetric matrix of odd order is always 0, whereas the determinant of a non-zero skew-symmetric matrix of even order is a perfect square. So if the skew-symmetric matrix is of even order it will be a perfect square, otherwise it will be 0, simple. Then the determinant of an orthogonal matrix is plus or minus 1, okay; which of the two it will be, that is a different matter. Then A · adj(A) = |A| · I, okay, that is okay. If I want to find the inverse of a 2×2 matrix, I go around in the anticlockwise direction: for A = [[a, b], [c, d]], swap a and d, negate b and c, and divide by the determinant, so A⁻¹ = (1/(ad − bc)) · [[d, −b], [−c, a]], simple.

Properties of the rank of a matrix. So what is the rank of a matrix? It is the order of the highest-order non-zero minor. And what is a minor? Simply delete some rows and columns, and the determinant of the submatrix that remains is a minor. The rank of a null matrix is 0, yes, we know. The rank of a non-singular matrix is its order, of course; and what is a non-singular matrix? One whose determinant is non-zero, okay. rank(A + B) ≤ rank(A) + rank(B), whereas rank(AB) ≤ min(rank(A), rank(B)), right, okay. If all the rows or columns of a matrix are identical, then the rank of that matrix is 1. Why? Because every row is a multiple of the first, so we can take the common factor out of all of them. After this I will go over eigenvectors and all those things which I learned from 3Blue1Brown, right, okay.

Now this thing: if rank(A) = n, then rank(adj A) = n; if rank(A) = n − 1, then rank(adj A) = 1; and if rank(A) ≤ n − 2, then rank(adj A) = 0, simple.

Now, coming to that, I will revise how we represent vectors and matrices. There are the two basis vectors, î and ĵ. What is a vector? A vector consists of components like x and y: if we are talking about two dimensions then x and y will be there, otherwise x, y, z will be there, so each vector needs as many components as there are dimensions. A few small sketches of the determinant, inverse, and rank properties above follow below.
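Here is a minimal numpy sketch of the determinant properties (the matrices are my own examples, not from the recording):

```python
import numpy as np

# Skew-symmetric matrix of odd order (3x3): determinant is 0.
K3 = np.array([[ 0,  2, -1],
               [-2,  0,  3],
               [ 1, -3,  0]])
print(np.linalg.det(K3))       # ~0.0

# Skew-symmetric matrix of even order (4x4): determinant is a perfect square.
K4 = np.array([[ 0,  1,  2,  3],
               [-1,  0,  4,  5],
               [-2, -4,  0,  6],
               [-3, -5, -6,  0]])
print(np.linalg.det(K4))       # 64.0 = 8^2 (the square of the Pfaffian)

# Orthogonal matrix (a rotation): determinant is +1 or -1.
t = 0.7
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
print(np.linalg.det(Q))        # ~1.0
```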
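And a small sketch of the anticlockwise rule for the 2×2 inverse, checked against numpy's own inverse (the entries are arbitrary):

```python
import numpy as np

# Swap the diagonal entries, negate the off-diagonal ones,
# and divide by the determinant.
a, b, c, d = 4.0, 7.0, 2.0, 6.0
A = np.array([[a, b],
              [c, d]])
A_inv = np.array([[ d, -b],
                  [-c,  a]]) / (a * d - b * c)
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```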
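A quick check of the rank properties, again with made-up matrices:

```python
import numpy as np

rank = np.linalg.matrix_rank
A = np.array([[1, 2], [2, 4]])   # rank 1: second row is twice the first
B = np.array([[0, 1], [1, 0]])   # rank 2

print(rank(A + B) <= rank(A) + rank(B))        # True
print(rank(A @ B) <= min(rank(A), rank(B)))    # True

# Identical rows => rank 1.
C = np.array([[3, 1, 4], [3, 1, 4], [3, 1, 4]])
print(rank(C))                                 # 1
```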
Now, what is an eigenvalue? First let us talk about the eigenvector. An eigenvector is a vector that is not knocked off its own line by the transformation: it remains on the same line, so that is the eigenvector. And if I talk about the eigenvalue, the eigenvalue is the factor by which the eigenvector scales or descales while staying in the same direction, on the same line. I say "line" because that is the 1D picture; in 2D it would be a plane. So, simply: the eigenvalue is just how much the vector is scaled, and the eigenvector itself is the vector that does not leave its own line after the transformation.

What is the transformation matrix? A transformation matrix is any matrix that just states where our basis vectors land after the transformation. So if the basis was (1, 0) and (0, 1) and they land at, let us say, (3, 0) and (3, 4), then the transformation matrix will be [[3, 3], [0, 4]], with the landing spots as its columns. Now, which vector did not change its line here? The basis vector î itself is an eigenvector, so that is a very good thing. Now let us talk more about this: if a diagonal matrix is there, then the basis vectors are its eigenvectors and the diagonal entries are the eigenvalues, simple. From 3Blue1Brown, this much is the information, and with it I have revised what a matrix is, what a basis vector is, what a transformation matrix is, and how multiplying the transformation matrix with a vector carries that vector into the other frame of reference, right; that is what a transformation matrix is.

Now coming back to this: what is echelon form? For echelon form we perform only row operations, and we try to make zeros at the front so that each next row has more zeros before its first non-zero element; then it is called the echelon form of the matrix. And from this we can tell the rank: the number of non-zero rows is the rank of the matrix. And what does the rank represent? The rank represents the number of independent vectors. What is an independent vector? If I remember from 3Blue1Brown, when this concept of independent vectors came up: if we take three vectors and the third vector does not take us into a new dimension, then it is worthless, because we can create it from the other two vectors. Or say we have two vectors and both are on the same line: we can scale one and get the other, and it does not take us into the next dimension, 2D, so there is one independent vector, and the second one we see is a dependent vector, because we can create it from the other. So what the rank represents is those independent vectors: the ones from which, if we are given only those, we can create the whole space. That is the span: the whole span can be created just from those independent vectors. And what nullity tells us is the number of independent variables. Two sketches follow below, one for the transformation example and one for echelon form and rank.
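A sketch of the transformation-matrix example above, where î lands at (3, 0) and ĵ at (3, 4), so î itself is an eigenvector:

```python
import numpy as np

# Columns = where the basis vectors land: i_hat -> (3, 0), j_hat -> (3, 4).
T = np.array([[3.0, 3.0],
              [0.0, 4.0]])

print(np.linalg.eigvals(T))       # eigenvalues 3 and 4
# The eigenvector for eigenvalue 3 is i_hat itself: T @ (1, 0) = (3, 0).
print(T @ np.array([1.0, 0.0]))

# For a diagonal matrix, the basis vectors are the eigenvectors
# and the diagonal entries are the eigenvalues.
D = np.diag([2.0, 5.0])
print(np.linalg.eigvals(D))       # [2. 5.]
```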
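And a sketch of row reduction and rank using sympy, on an arbitrary matrix with one dependent row:

```python
import sympy as sp

M = sp.Matrix([[1, 2, 3],
               [2, 4, 6],    # twice the first row -> dependent
               [1, 0, 1]])

# Row-reduce: each successive row has more leading zeros.
reduced, pivots = M.rref()
print(reduced)        # Matrix([[1, 0, 1], [0, 1, 1], [0, 0, 0]])
print(len(pivots))    # 2 non-zero rows
print(M.rank())       # 2
```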
Now, what is an independent variable? Let me revise what it was. Independent vectors we have seen; an independent variable is one that does not depend on the others, a free variable. And nullity and rank go together: nullity is equal to the total number of variables minus the rank, that is, the total number of columns minus the rank. (A sketch of this is among the ones below.)

Now if I come to linearly dependent and linearly independent: why do we call it linear? Because after the transformation the origin should remain where it was, and every line should stay straight and parallel; nothing should become curved. So that is linear. Linearly dependent vectors are what I just talked about, the ones we can create from the other vectors, and linearly independent vectors are the ones we cannot create from the other vectors. So that is linearly independent. And if the vectors are dependent, then the rank of the matrix they form will be less than n, where n is the size of the matrix. Orthogonal vectors are like this: if we take the transpose of one vector and multiply it with the other vector and we get 0, then those vectors are orthogonal vectors, simple. The basis of a space we have already seen: those independent vectors from which we can create the whole span, the whole space.

Now, properties of eigenvalues and eigenvectors. The roots of the characteristic equation are known as eigenvalues. Eigenvectors corresponding to distinct eigenvalues of a real symmetric matrix are always orthogonal: so if distinct eigenvalues are there and the matrix is real symmetric, then the eigenvectors will be orthogonal; basically, distinct plus symmetric equals orthogonal. Then, if all the leading minors of a real symmetric matrix are positive, all its eigenvalues are positive. Eigenvectors corresponding to distinct eigenvalues of any square matrix are always linearly independent: so when distinct eigenvalues are there, the eigenvectors will be linearly independent, okay. Eigenvectors corresponding to repeated eigenvalues may or may not be independent, right, okay. Algebraic multiplicity is the number of times an eigenvalue is repeated, and geometric multiplicity is the number of linearly independent eigenvectors corresponding to a repeated eigenvalue.

Now we come to: the sum of the eigenvalues is the trace, and the product of the eigenvalues is the determinant. The eigenvalues of A and A-transpose are the same. And then we can say that wherever A appears, we can replace it with λ, an eigenvalue, and create an equation with that: if λ is an eigenvalue of A, then for A⁻¹ it will be 1/λ, for adj(A) it will be |A|/λ, and if I take A², it will be λ², that is all, okay. When a matrix is in the block form [[B, 0], [0, C]], we should find the eigenvalues from B and C only, and those will be the eigenvalues of the whole matrix; if it is like [[B, C], [0, D]], then also we need to find them from B and D only. The point is that one of the off-diagonal blocks in the big matrix should be 0; then we can find the eigenvalues from the diagonal blocks, like B and C or B and D, okay. So that was all there; three sketches for the nullity and eigenvalue properties follow below.
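A minimal sympy sketch of nullity = columns − rank (the rank-nullity theorem), reusing the example matrix from above:

```python
import sympy as sp

M = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])

rank = M.rank()                 # number of independent rows/columns
nullity = len(M.nullspace())    # number of independent (free) variables
print(rank, nullity, M.cols)    # 2 1 3  ->  nullity = columns - rank
```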
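A sketch of the eigenvalue properties on a small real symmetric matrix of my own choosing: distinct eigenvalues give orthogonal eigenvectors, the sum is the trace, the product is the determinant, and A⁻¹ and A² carry 1/λ and λ²:

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # real symmetric, eigenvalues 1 and 3

vals, vecs = np.linalg.eigh(S)
print(vals)                                       # [1. 3.]  (distinct)
print(vecs[:, 0] @ vecs[:, 1])                    # ~0.0 -> orthogonal

print(np.isclose(vals.sum(),  np.trace(S)))       # True: sum = trace
print(np.isclose(vals.prod(), np.linalg.det(S)))  # True: product = determinant

print(np.linalg.eigvals(np.linalg.inv(S)))   # 1 and 1/3  (in some order)
print(np.linalg.eigvals(S @ S))              # 1 and 9    (in some order)
```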
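And a sketch of the block-triangular case [[B, C], [0, D]]: the eigenvalues of the big matrix come from B and D alone, whatever C is (all three blocks here are arbitrary):

```python
import numpy as np

B = np.array([[1.0, 2.0],
              [0.0, 3.0]])      # eigenvalues 1, 3
D = np.array([[4.0, 0.0],
              [1.0, 5.0]])      # eigenvalues 4, 5
C = np.array([[7.0, 8.0],
              [9.0, 2.0]])      # does not affect the eigenvalues

M = np.block([[B, C],
              [np.zeros((2, 2)), D]])
print(sorted(np.linalg.eigvals(M).real))   # [1.0, 3.0, 4.0, 5.0]
```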
Now, the transjugate of a matrix: the transpose of the conjugate of a matrix is known as the transjugate (the conjugate transpose), written A^θ. So what is a Hermitian matrix? A^θ = A, okay. A skew-Hermitian matrix is A^θ = −A, and a unitary matrix is one where A · A^θ = I, okay. The eigenvalues of a Hermitian or real symmetric matrix are always real, okay; Hermitian comes in when complex numbers come in, that is when we talk about the transjugate. The eigenvalues of a skew-Hermitian or skew-symmetric matrix are always 0 or purely imaginary; that we need to remember. The eigenvalues of a unitary or orthogonal matrix have absolute value 1. So how do we find this transjugate A^θ? First we conjugate and then we transpose; that is the transjugate, okay. The Cayley-Hamilton theorem we all know: every square matrix satisfies its own characteristic equation det(A − λI) = 0, so wherever λ appears in the characteristic polynomial, we replace it with A, and the result is the zero matrix, simple. That is all for matrices; two sketches of these last properties are below. Now I will see you next time, see you, thank you, bye, bye.
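A numpy sketch of the Hermitian, skew-Hermitian, and unitary properties, on small hand-picked complex matrices:

```python
import numpy as np

H = np.array([[2, 1 - 1j],
              [1 + 1j, 3]])         # Hermitian: H^theta = H
print(np.allclose(H.conj().T, H))   # True
print(np.linalg.eigvals(H))         # real eigenvalues (here 1 and 4)

K = np.array([[1j, 2],
              [-2, -1j]])           # skew-Hermitian: K^theta = -K
print(np.allclose(K.conj().T, -K))  # True
print(np.linalg.eigvals(K))         # purely imaginary (+-i*sqrt(5))

U = np.array([[0, 1j],
              [1j, 0]])             # unitary: U U^theta = I
print(np.allclose(U @ U.conj().T, np.eye(2)))   # True
print(np.abs(np.linalg.eigvals(U)))             # absolute value 1
```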
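And finally a sketch of the Cayley-Hamilton theorem for a 2×2 matrix, whose characteristic polynomial is λ² − (trace)·λ + det:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
tr, det = np.trace(A), np.linalg.det(A)

# Replace lambda with A in the characteristic polynomial:
# the matrix satisfies its own characteristic equation.
result = A @ A - tr * A + det * np.eye(2)
print(np.allclose(result, 0))   # True
```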