Addition of two polynomials.
p = addpol(p1,p2)
addpol(p1,p2) adds two polynomials p1 and p2.
Each polynomial is given as a vector of coefficients, with the highest power first; e.g., x^2+2x+3 is represented by [1,2,3]. If the two polynomials have different degrees, the shorter vector of coefficients is padded with leading zeros.
addpol([1,2,3], [2,5])
  1 4 8
addpol([1,2,3], -[2,5])   % subtraction
  1 0 -2
addpol([1,2,3;4,5,6], [1;1])
  1 2 4
  4 5 7
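The padding can also be written explicitly; a minimal sketch which reproduces addpol(p1,p2) for row vectors when p2 is the shorter argument:

p1 = [1,2,3];                                 % x^2 + 2x + 3
p2 = [2,5];                                   % 2x + 5
p1 + [zeros(1, length(p1)-length(p2)), p2]    % pad p2 with leading zeros, then add
  1 4 8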
Diagonal similarity transform for balancing a matrix.
B = balance(X)
(T, B) = balance(X)
balance(X) applies a diagonal similarity transform to the square matrix X to make the rows and columns as close in norm as possible. Balancing may reduce the 1-norm of the matrix and improve the accuracy of the computed eigenvalues and/or eigenvectors. To avoid round-off errors, balance scales X with powers of 2.
balance returns the balanced matrix B, which has the same eigenvalues as X, and optionally the diagonal scaling matrix T such that T\X*T = B.
A = [1,2e6;3e-6,4];
(T,B) = balance(A)
  T =
    16384     0
    0         3.125e-2
  B =
    1         3.8147
    1.5729    4
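As a quick check of these properties, the output of the example above can be reproduced manually; eig is used here only to verify that the eigenvalues are preserved (up to rounding errors):

A = [1,2e6;3e-6,4];
(T,B) = balance(A);
T \ A * T      % reproduces the balanced matrix B
eig(A)         % same values as eig(B), up to rounding errors
eig(B)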
Cholesky decomposition.
M2 = chol(M1)
If a square matrix M1 is symmetric (or hermitian) and positive definite, it can be decomposed into the following product:

M1 = M2' * M2

where M2 is an upper triangular matrix. The Cholesky decomposition can be seen as a kind of square root.
The part of M1 below the main diagonal is not used, because M1 is assumed to be symmetric or hermitian. An error occurs if M1 is not positive definite.
M = chol([5,3;3,8])
  M =
    2.2361    1.3416
    0         2.4900
M'*M
  5 3
  3 8
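Since M1 = M2'*M2 with M2 upper triangular, a symmetric positive-definite system can be solved with two triangular solves; a minimal sketch (variable names are illustrative):

A = [5,3;3,8];
b = [1;2];
R = chol(A);        % A = R'*R, with R upper triangular
x = R \ (R' \ b);   % forward substitution, then back substitution
A * x               % reproduces b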
Condition number of a matrix.
x = cond(M)
cond(M) returns the condition number of the matrix M, i.e. the ratio of its largest singular value to the smallest one. The larger the condition number, the more ill-conditioned the inversion of the matrix.
cond([1, 0; 0, 1])
  1
cond([1, 1; 1, 1+1e-3])
  4002.0008
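Since the condition number is defined from the singular values, it can also be obtained with svd; a small sketch:

M = [1, 1; 1, 1+1e-3];
s = svd(M);        % singular values, largest first
max(s) / min(s)    % same value as cond(M)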
Convolution or polynomial multiplication.
v = conv(v1,v2)
M = conv(M1,M2)
M = conv(M1,M2,dim)
conv(v1,v2) convolves the vectors v1 and v2, giving a vector whose length is length(v1)+length(v2)-1. The result is a row vector if both arguments are row vectors, and a column vector if both arguments are column vectors. Otherwise, arguments are considered as matrices.
conv(M1,M2) convolves the matrices M1 and M2 column by column. conv(M1,M2,dim) convolves along the dimension dim, 1 for columns and 2 for rows. If one of the matrices has a single column or a single row, it is repeated to match the size of the other argument.
conv([1,2],[1,2,3])
  1 4 7 6
conv([1,2],[1,2,3;4,5,6],2)
  1 4  7  6
  4 13 16 12
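Because convolving coefficient vectors amounts to multiplying the corresponding polynomials, conv can be used to expand a product of factors; for instance:

conv([1,1], [1,-1])              % (x+1)*(x-1) = x^2 - 1
  1 0 -1
conv(conv([1,1],[1,1]), [1,1])   % (x+1)^3
  1 3 3 1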
Two-dimensions convolution of matrices.
M = conv2(M1,M2)
M = conv2(M1,M2,kind)
conv2(M1,M2) convolves the matrices M1 and M2 along both dimensions. The optional third argument specifies how to crop the result. Let (nl1,nc1)=size(M1) and (nl2,nc2)=size(M2). With kind='full' (the default value), the result M has nl1+nl2-1 rows and nc1+nc2-1 columns. With kind='same', the result M has nl1 rows and nc1 columns; this option is very useful if M1 represents equidistant samples in a plane (e.g. pixels) to be filtered with the finite impulse response 2-d filter M2. With kind='valid', the result M has nl1-nl2+1 rows and nc1-nc2+1 columns, or is the empty matrix []; if M1 represents data filtered by M2, the borders where the convolution sum is not entirely contained in M1 are removed.
conv2([1,2,3;4,5,6;7,8,9],[1,1,1;1,1,1;1,1,1])
  1   3   6   5   3
  5  12  21  16   9
  12 27  45  33  18
  11 24  39  28  15
  7  15  24  17   9
conv2([1,2,3;4,5,6;7,8,9],[1,1,1;1,1,1;1,1,1],'same')
  12 21 16
  27 45 33
  24 39 28
conv2([1,2,3;4,5,6;7,8,9],[1,1,1;1,1,1;1,1,1],'valid')
  45
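As an illustration of kind='same' for filtering sampled data, the sketch below smooths a small matrix of hypothetical pixel values with a 3-by-3 averaging kernel; the result has the same size as the input:

I = [1,2,3,4;2,3,4,5;3,4,5,6];   % hypothetical pixel values
K = ones(3) / 9;                 % 3-by-3 averaging kernel
conv2(I, K, 'same')              % smoothed image, same size as I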
Covariance.
M = cov(data)
M = cov(data, 0)
M = cov(data, 1)
cov(data) returns the best unbiased estimate of the m-by-m covariance matrix of the n-by-m matrix data for a normal distribution. Each row of data is an observation where m quantities were measured. The covariance matrix is real and symmetric, even if data is complex. Its diagonal elements are the variances of the columns of data. cov(data,0) is the same as cov(data).
cov(data,1) returns the m-by-m covariance matrix of the n-by-m matrix data when data contains the whole population (normalization by n instead of n-1).
cov([1,2;2,4;3,5])
  1    1.5
  1.5  2.3333
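The unbiased estimate can also be written explicitly; a sketch which should reproduce the example above (assuming the two outputs of size are obtained as shown):

data = [1,2;2,4;3,5];
(n, m) = size(data);
c = data - ones(n,1) * mean(data);   % subtract the column means
c' * c / (n - 1)                     % same matrix as cov(data)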
Cumulative products.
M2 = cumprod(M1)
M2 = cumprod(M1,dim)
cumprod(M1) returns a matrix M2 of the same size as M1, whose elements M2(i,j) are the product of all the elements M1(k,j) with k<=i. cumprod(M1,dim) operates along the dimension dim (column-wise if dim is 1, row-wise if dim is 2).
cumprod([1,2,3;4,5,6])
  1 2  3
  4 10 18
cumprod([1,2,3;4,5,6],2)
  1 2  6
  4 20 120
Cumulative sums.
M2 = cumsum(M1)
M2 = cumsum(M1,dim)
cumsum(M1) returns a matrix M2 of the same size as M1, whose elements M2(i,j) are the sum of all the elements M1(k,j) with k<=i. cumsum(M1,dim) operates along the dimension dim (column-wise if dim is 1, row-wise if dim is 2).
cumsum([1,2,3;4,5,6])
  1 2 3
  5 7 9
cumsum([1,2,3;4,5,6],2)
  1 3 6
  4 9 15
Deconvolution or polynomial division.
q = deconv(a,b)
(q,r) = deconv(a,b)
(q,r)=deconv(a,b) divides the polynomial a by the polynomial b, resulting in the quotient q and the remainder r. All polynomials are given as vectors of coefficients, highest power first. The degree of the remainder is strictly smaller than the degree of b. deconv is the inverse of conv: a = addpol(conv(b,q),r).
(q,r) = deconv([1,2,3,4,5],[1,3,2])
  q =
    1 -1 4
  r =
    -6 -3
addpol(conv(q,[1,3,2]),r)
  1 2 3 4 5
Determinant of a square matrix.
d = det(M)
det(M) is the determinant of the square matrix M, which is 0 (up to rounding errors) if M is singular. The function rank is a numerically more robust test for singularity.
det([1,2;3,4])
  -2
det([1,2;1,2])
  0
Differences.
dm = diff(M)
dm = diff(M,n)
dm = diff(M,n,dim)
dm = diff(M,[],dim)
diff(M) calculates the differences between consecutive elements of the columns of matrix M, or between consecutive elements of M if it is a row vector.
diff(M,n) calculates the n:th order differences, i.e. it repeats the same operation n times. Up to a scalar factor, the result is an approximation of the n:th order derivative based on equidistant samples.
diff(M,n,dim) operates along dimension dim. If the second argument n is the empty matrix [], the default value of 1 is assumed.
diff([1,3,5,4,8])
  2 2 -1 4
diff([1,3,5,4,8],2)
  0 -3 5
diff([1,3,5;4,8,2;3,9,8],1,2)
  2  2
  4 -6
  6 -1
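As mentioned above, up to a scalar factor diff approximates a derivative when the samples are equidistant; a small sketch (the step 0.1 is arbitrary, and the usual start:step:stop range syntax is assumed):

t = 0:0.1:1;
y = t .^ 2;       % samples of y = t^2
diff(y) / 0.1     % approximates the derivative 2*t between the samples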
Eigenvalues and eigenvectors of a matrix.
e = eig(M)
(V,D) = eig(M)
eig(M) returns the vector of eigenvalues of the square matrix M.
(V,D) = eig(M) returns a diagonal matrix D of eigenvalues and a matrix V whose columns are the corresponding eigenvectors. They are such that M*V = V*D.
eig([1,2;3,4])
  -0.3723 5.3723
(V,D) = eig([1,2;2,1])
  V =
    0.7071    0.7071
    -0.7071   0.7071
  D =
    -1 0
    0  3
[1,2;2,1] * V
  -0.7071   2.1213
  0.7071    2.1213
V * D
  -0.7071   2.1213
  0.7071    2.1213
The input argument must be real; with two output arguments, it must also be symmetric.
Exponential of a square matrix.
M2 = expm(M1)
expm(M) is the exponential of the square matrix M, which is usually different from the element-wise exponential of M given by exp.
expm([1,1;1,1])
  4.1945 3.1945
  3.1945 4.1945
exp([1,1;1,1])
  2.7183 2.7183
  2.7183 2.7183
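For a matrix which can be diagonalized, the matrix exponential can also be obtained from the eigenvalue decomposition; a sketch for the symmetric example above, where V is orthogonal so that inv(V) = V':

M = [1,1;1,1];
(V, D) = eig(M);               % M is real and symmetric
V * diag(exp(diag(D))) * V'    % same result as expm(M)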
Fast Fourier Transform.
F = fft(f)
F = fft(f,n)
F = fft(f,n,dim)
fft(f) returns the discrete Fourier transform (DFT) of the vector f, or the DFT of each column of the matrix f. With a second argument n, only the first n values are used; if n is larger than the length of the data, zeros are added for padding. An optional third argument dim gives the dimension along which the DFT is performed; it is 1 for calculating the DFT of the columns of f, or 2 for its rows. fft(f,[],dim) is accepted if n is not required.
fft is based on a radix-2 Fast Fourier Transform if the length of the data is a power of 2. Otherwise, a plain DFT is performed, which is much slower for large matrices.
The coefficients of the DFT are given from the zero frequency to the largest frequency (one frequency step below the inverse of the sampling period). If the input f is real, its DFT has symmetries, and the first half contains all the relevant information.
fft(1:4)
  10 -2+2j -2 -2-2j
fft(1:4, 3)
  6 -1.5+0.866j -1.5-0.866j
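For a real signal sampled uniformly, the magnitude of the DFT shows at which discrete frequencies the energy lies; a small sketch (assuming the constant pi and the function sin are available):

N = 16;
t = (0:N-1) / N;       % one window of N samples
x = sin(2*pi*3*t);     % sinusoid with 3 cycles in the window
abs(fft(x))            % peaks at elements 4 and 14, i.e. frequency indices 3 and N-3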
Digital filtering of data.
y = filter(b,a,u)
y = filter(b,a,u,x)
y = filter(b,a,u,x,dim)
The vector u is filtered with the digital filter whose coefficients are given by the polynomials b and a. The filtered data can also be a matrix; each column (or each row if a fifth argument is provided and is 2) is filtered separately. The fourth argument, if provided and different from the empty matrix [], is the initial state of the filter; it has max(length(a),length(b))-1 elements, and is either a vector if the state is the same for all signals, or a matrix whose columns are the initial states of each signal. The result y, which has the same size as the input, could be computed with the following code if u is a vector:
% x is the filter state, taken from the fourth argument
% or set to zeros(1, max(length(a),length(b))-1)
b = b / a(1);                % normalize the coefficients so that a(1) == 1
a = a / a(1);
if length(a) > length(b)     % zero-pad the shorter coefficient vector
  b = [b, zeros(1, length(a) - length(b))];
else
  a = [a, zeros(1, length(b) - length(a))];
end
for i = 1:length(u)
  y(i) = b(1) * u(i) + x(1);
  for j = 1:length(x) - 1    % update the filter state
    x(j) = b(j + 1) * u(i) + x(j + 1) - a(j + 1) * y(i);
  end
  x(length(x)) = b(length(x) + 1) * u(i) - a(length(x) + 1) * y(i);
end
filter([1,2], [1,2,3], ones(1,10))
  1 1 -2 4 1 -11 22 -8 -47 121
u = [5,6,5,6,5,6,5];
p = 0.8;
filter(1-p, [1,-p], u, p*u(1))   % low-pass with matching initial state
  5 5.2 5.16 5.328 5.2624 5.4099 5.3279
Inverse Fast Fourier Transform.
f = ifft(F)
f = ifft(F,n)
f = ifft(F,n,dim)
ifft returns the inverse discrete Fourier transform (inverse DFT). Up to the sign and a scaling factor, the inverse DFT and the DFT are the same operation: for a vector d, ifft(d) = conj(fft(conj(d)))/length(d). ifft has the same syntax as fft.
F = fft([1,2,3,4])
  F =
    10 -2+2j -2 -2-2j
ifft(F)
  1 2 3 4
Inverse of a square matrix.
M2 = inv(M1)
inv(M1) returns the inverse M2 of the square matrix M1, i.e. a matrix of the same size such that M2*M1 = M1*M2 = eye(size(M1)). M1 must not be singular; otherwise, the elements of the result are infinite.
To solve a set of linear equations, the operator \ is more efficient.
inv([1,2;3,4])
  -2    1
  1.5  -0.5
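As noted above, solving a set of linear equations with the operator \ avoids forming the inverse explicitly; a minimal sketch:

A = [1,2;3,4];
b = [5;6];
inv(A) * b   % works, but computes the whole inverse first
A \ b        % preferred: solves A*x = b directly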
See also: operator /, operator \, pinv, rank, eye
Kronecker product.
M = kron(A, B)
kron(A,B) returns the Kronecker product of the matrices A (size m1-by-n1) and B (size m2-by-n2), i.e. an (m1*m2)-by-(n1*n2) matrix made of m1-by-n1 blocks, where the (i,j) block is the product of the element A(i,j) with B.
kron([1,2;3,4],ones(2))
  1 1 2 2
  1 1 2 2
  3 3 4 4
  3 3 4 4
Maximum value of a vector or of two arguments.
x = max(v)
(v,ind) = max(v)
v = max(M,[],dim)
(v,ind) = max(M,[],dim)
M3 = max(M1,M2)
max(v) returns the largest element of the vector v. NaN's are ignored. The optional second output argument is the index of the maximum in v; if several elements have the same maximum value, only the first one is obtained.
max(M) operates on the columns of the matrix M and returns a row vector. max(M,[],dim) operates along dimension dim (1 for columns, 2 for rows).
max(M1,M2) returns a matrix whose elements are the maximum of the corresponding elements of the matrices M1 and M2. M1 and M2 must have the same size, unless one of them is a scalar, which is compared with every element of the other.
(mx,ix) = max([1,3,2,5,8,7])
  mx = 8
  ix = 5
max([1,3;5,nan], [], 2)
  3 5
max([1,3;5,nan], 2)
  2 3
  5 2
Arithmetic mean of a vector.
x = mean(v)
v = mean(M)
v = mean(M,dim)
mean(v) returns the arithmetic mean of the elements of vector v. mean(M) returns a row vector whose elements are the means of the corresponding columns of matrix M. mean(M,dim) returns the mean of matrix M along dimension dim; the result is a row vector if dim is 1, or a column vector if dim is 2.
mean(1:5)
  3
mean((1:5)')
  3
mean([1,2,3;5,6,7])
  3 4 5
mean([1,2,3;5,6,7],1)
  3 4 5
mean([1,2,3;5,6,7],2)
  2 6
Minimum value of a vector or of two arguments.
x = min(v)
(v,ind) = min(v)
v = min(M,[],dim)
(v,ind) = min(M,[],dim)
M3 = min(M1,M2)
min(v) returns the smallest element of the vector v. NaN's are ignored. The optional second output argument is the index of the minimum in v; if several elements have the same minimum value, only the first one is obtained.
min(M) operates on the columns of the matrix M and returns a row vector. min(M,[],dim) operates along dimension dim (1 for columns, 2 for rows).
min(M1,M2) returns a matrix whose elements are the minimum of the corresponding elements of the matrices M1 and M2. M1 and M2 must have the same size, unless one of them is a scalar, which is compared with every element of the other.
(mx,ix) = min([1,3,2,5,8,7])
  mx = 1
  ix = 1
min([1,3;5,nan], [], 2)
  1 5
min([1,3;5,nan], 2)
  1 2
  2 2
Norm of a vector or matrix.
x = norm(v)
x = norm(v,kind)
x = norm(M)
x = norm(M,kind)
With one argument, norm calculates the 2-norm of a vector or the induced 2-norm of a matrix. The optional second argument specifies the kind of norm.
| Kind | Vector | Matrix |
|---|---|---|
| none or 2 | sqrt(sum(abs(v).^2)) | largest singular value (induced 2-norm) |
| 1 | sum(abs(v)) | largest column sum of abs |
| inf or 'inf' | max(abs(v)) | largest row sum of abs |
| -inf | min(abs(v)) | largest row sum of abs |
| p | sum(abs(v).^p)^(1/p) | invalid |
| 'fro' | sqrt(sum(abs(v).^2)) | sqrt(sum(diag(M'*M))) |
norm([3,4])
  5
norm([2,5;9,3])
  10.2194
norm([2,5;9,3],1)
  11
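The matrix entries of the table above can be checked directly; a small sketch:

M = [2,5;9,3];
max(svd(M))                  % induced 2-norm, same as norm(M)
sum(abs(M))                  % column sums of abs; the largest one is norm(M,1)
sqrt(sum(sum(abs(M).^2)))    % Frobenius norm, same as norm(M,'fro')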
Pseudo-inverse of a matrix.
M2 = pinv(M1)
M2 = pinv(M1,e)
pinv(M1) returns the pseudo-inverse of the matrix M1. For a nonsingular square matrix, the pseudo-inverse is the same as the inverse. For an arbitrary (possibly nonsquare) matrix, the pseudo-inverse M2 has the following properties: size(M2) = size(M1'), M1*M2*M1 = M1, M2*M1*M2 = M2, and the norm of M2 is minimum. The pseudo-inverse is based on the singular value decomposition, where only the singular values larger than some small threshold are considered. This threshold can be specified with the optional second argument e.
If M1 is a full-rank matrix with more rows than columns, pinv returns the least-squares solution x = pinv(M1)*y = (M1'*M1)\(M1'*y) of the over-determined system M1*x = y.
pinv([1,2;3,4])
  -2    1
  1.5  -0.5
M2 = pinv([1;2])
  M2 =
    0.2 0.4
[1;2] * M2 * [1;2]
  1 2
M2 * [1;2] * M2
  0.2 0.4
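The least-squares property can be used to fit an over-determined linear model; a sketch with arbitrary data values:

t = [0;1;2];
y = [1;3;4];
A = [t, ones(3,1)];   % model y = a*t + b; full rank, more rows than columns
pinv(A) * y           % least-squares estimate of [a; b]
(A'*A) \ (A'*y)       % same solution, written with the normal equations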
pinv supports only real matrices.
Characteristic polynomial of a square matrix or polynomial coefficients based on its roots.
pol = poly(M)
pol = poly(r)
With a matrix argument, poly(M) returns the characteristic polynomial det(x*eye(size(M))-M) of the square matrix M. The roots of the characteristic polynomial are the eigenvalues of M.
With a vector argument, poly(r) returns the polynomial whose roots are the elements of the vector r. The first coefficient of the polynomial is 1.
poly([1,2;3,4])
  1 -5 -2
roots(poly([1,2;3,4]))
  5.3723 -0.3723
eig([1,2;3,4])
  -0.3723 5.3723
poly(1:3)
  1 -6 11 -6
Derivative of a polynomial.
pol2 = polyder(pol1)
polyder(pol1) returns the polynomial which is the derivative of the polynomial pol1. Both polynomials are given as vectors of their coefficients, highest power first. The result is a row vector.
polyder([1, 2, 3, 4, 5])
  4 6 6 4
Numerical value of a polynomial evaluated at some point.
y = polyval(pol, x)
polyval(pol,x) evaluates the polynomial pol at x, which can be a scalar or a matrix of arbitrary size. The result has the same size as x.
polyval([1,3,8], 2)
  18
polyval([1,2], 1:5)
  3 4 5 6 7
Product of the elements of a vector.
x = prod(v)
v = prod(M)
v = prod(M,dim)
prod(v) returns the product of the elements of vector v. prod(M) returns a row vector whose elements are the products of the corresponding columns of matrix M. prod(M,dim) returns the product of matrix M along dimension dim; the result is a row vector if dim is 1, or a column vector if dim is 2.
prod(1:5)
  120
prod((1:5)')
  120
prod([1,2,3;5,6,7])
  5 12 21
prod([1,2,3;5,6,7],1)
  5 12 21
prod([1,2,3;5,6,7],2)
  6 210
Rank of a matrix.
x = rank(M)
x = rank(M,e)
rank(M) returns the rank of the matrix M, i.e. the number of linearly independent rows or columns. It is obtained by computing the singular values and counting those which are significantly larger than 0. The threshold below which singular values are considered to be 0 can be specified with the optional second argument e.
rank([1,1;0,0])
  1
rank([1,1;0,1j])
  2
Roots of a polynomial.
r = roots(pol)
r = roots(M)
r = roots(M,dim)
roots(pol) calculates the roots of the polynomial pol. The polynomial is given by the vector of its coefficients, highest power first, while the result is a column vector.
With a matrix as argument, roots(M) calculates the roots of the polynomials corresponding to each column of M. An optional second argument is used to specify in which dimension roots operates (1 for columns, 2 for rows). The roots of the i:th polynomial are in the i:th column of the result, whatever the value of dim is.
roots([1, 0, -1])
  1 -1
roots([1, 0, -1]')
  1 -1
roots([1, 1; 0, 5; -1, 6])
  1  -2
  -1 -3
roots([1, 0, -1]', 2)
  []
The Laguerre algorithm is used for fast evaluation. The price to pay is a suboptimal precision for multiple roots and/or high-order (typically >30) polynomials.
Sum of the elements of a vector.
x = sum(v)
v = sum(M)
v = sum(M,dim)
sum(v) returns the sum of the elements of vector v. sum(M) returns a row vector whose elements are the sums of the corresponding columns of matrix M. sum(M,dim) returns the sum of matrix M along dimension dim; the result is a row vector if dim is 1, or a column vector if dim is 2.
sum(1:5)
  15
sum((1:5)')
  15
sum([1,2,3;5,6,7])
  6 8 10
sum([1,2,3;5,6,7],1)
  6 8 10
sum([1,2,3;5,6,7],2)
  6 18
Singular value decomposition.
s = svd(M)
(U,S,V) = svd(M)
(U,S,V) = svd(M,false)
The singular value decomposition (U,S,V) = svd(M) decomposes the m-by-n matrix M such that M = U*S*V', where S is an m-by-n diagonal matrix with decreasing positive diagonal elements (the singular values of M), U is an m-by-m unitary matrix, and V is an n-by-n unitary matrix. The number of non-zero diagonal elements of S (up to rounding errors) gives the rank of M.
When m>n, (U,S,V) = svd(M,false) produces an n-by-n diagonal matrix S and an m-by-n matrix U (economy-size decomposition). The relationship M = U*S*V' still holds.
With one output argument, s = svd(M) returns the vector of singular values.
The singular values of M can also be computed with s = sqrt(eig(M'*M)), but svd is faster and more robust.
(U,S,V) = svd([1,2;3,4])
  U =
    0.4046    0.9145
    0.9145    -0.4046
  S =
    5.465     0
    0         0.366
  V =
    0.576     -0.8174
    0.8174    0.576
U*S*V'
  1 2
  3 4
svd([1,2;1,2])
  3.1623 3.4697e-19
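Setting the smallest singular values to zero gives the best lower-rank approximation in the 2-norm sense; a small sketch based on the example above:

(U,S,V) = svd([1,2;3,4]);
S(2,2) = 0;     % discard the smallest singular value
U * S * V'      % best rank-1 approximation of [1,2;3,4]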
Unless there is a single output argument s, the input argument M must be real.
Trace of a matrix.
tr = trace(M)
trace(M) returns the trace of the matrix M, i.e. the sum of its diagonal elements.
trace([1,2;3,4])
  5