Flmatrix: Floating Point Matrices
(require flmatrix) | package: base |
This manual documents the matrix library flmatrix.
1 Introduction
A matrix is a rectangular arrangement of numbers in rows and columns. This library provides functions to construct and compute with matrices whose elements are IEEE double precision floating point numbers. These numbers are referred to as flonums in the Racket manual, but the most common name for them is simply doubles.
Restricting the scope of the library to dense matrices of floating point numbers allows the implementation to use routines implemented in Fortran and C. The low-level routines consist of calls to functions in BLAS and LAPACK. BLAS (Basic Linear Algebra Subprograms) and LAPACK (Linear Algebra PACKage) are industry standard libraries and are available on all major platforms.
If you need matrix algebra over more general numbers, look at the functional matrix library in Matrices and Linear Algebra.
This library can be used in a functional manner, but imperative operations are available. There are at least two reasons to use the imperative approach: 1) textbooks often describe matrix algorithms using imperative operations, and 2) imperative operations can reduce the amount of memory needed during a computation.
The library provides operations at three levels:
Level 1: High level - do what I mean
Level 2: Medium level - do what I mean this way
Level 3: Low level - do it using this underlying C-function
To use the library one can get by with level 1 operations, but if you understand the underlying representation, you can improve your algorithms using level 2 operations. For those who really want to squeeze out the last bit of performance, we have made level 3 operations available as well.
2 Quick Tutorial
This section shows how to do simple matrix computations. The beginning part of the tutorial describes working with matrices simply as arrays of numbers. The end shows how to do linear algebra.
2.1 Basic Properties
An flmatrix consists conceptually of a two-dimensional array of floating point numbers. An m\times n (m by n) matrix is divided into m rows and n columns.
The basic properties of an flmatrix can be examined using these functions:
(shape A) returns a list with the number of rows and columns
(size A) the number of elements in the matrix
(nrows A) the number of rows
(ncols A) the number of columns
> (define A (flmatrix: [[1 2 3] [4 5 5]])) > (shape A) '(2 3)
> (size A) 6
> (nrows A) 2
> (ncols A) 3
2.2 Basic Indexing
Since a matrix is divided into rows and columns we can refer to an element in the matrix by row and column numbers. The element on the i’th row and j’th column is referred to as the element with index (i,j).
The indices are zero-based so a matrix with m rows and n columns has row-indices 0, 1, \ldots, m-1 and column-indices 0, 1, \ldots n-1.
(ref A i j) the element in A with index (i,j)
(row A i) the i’th row of A
(col A j) the j’th column of A
Notice that row and column vectors are simply matrices with a single row and a single column respectively.
> (define A (flmatrix: [[1 2 3] [4 5 5]])) > (ref A 0 1) 2.0
> (row A 0) (flmatrix: ((1.0 2.0 3.0)))
> (col A 1) (flmatrix: ((2.0) (5.0)))
2.3 Matrix Creation
There are several ways of creating matrices.
Use matrix to create an flmatrix from existing Racket data. It can convert vector-of-vectors and list-of-lists representations of matrices into the flmatrix representation. A vector of numbers or a list of numbers will be converted into a column vector (a matrix with only one column).
Any non-floating point numbers will be converted to floating point. The function matrix also accepts f64vectors as input.
(matrix obj) create a matrix with values from obj
> (matrix '[[1/2 1/3] [4 5]]) (flmatrix: ((0.5 0.3333333333333333) (4.0 5.0)))
> (matrix #[#[1 2 3] #[4 5 6]]) (flmatrix: ((1.0 2.0 3.0) (4.0 5.0 6.0)))
> (matrix (list 1 2 3)) (flmatrix: ((1.0) (2.0) (3.0)))
> (matrix (vector 1 2 3)) (flmatrix: ((1.0) (2.0) (3.0)))
> (matrix (f64vector 1 2 3)) (flmatrix: ((1.0) (2.0) (3.0)))
After conversion the created flmatrix will contain a pointer to a piece of newly allocated memory containing the floating point numbers. If you happen to work with data in the form of f64vectors, you can avoid the allocation by using matrix! instead. If the same f64vector is used to create two matrices with matrix!, they will share the same backing array - so setting an element in one matrix will affect the other.
(matrix! obj) create a matrix with values from obj; avoids allocation of a backing array if possible
In order to create a matrix of a specific size with all zeros or all ones, use the functions zeros and ones. Use eye to make a matrix with ones on a diagonal.
(zeros n) create a square n\times n matrix with all zeros
(zeros m n) create a m\times n matrix with all zeros
(ones n) create a square n\times n matrix with all ones
(ones m n) create a m\times n matrix with all ones
(eye m n k) create a m\times n matrix with ones on the k’th diagonal
The arguments n and k are optional for eye and default to m and 0 respectively.
> (zeros 2) (flmatrix: ((0.0 0.0) (0.0 0.0)))
> (zeros 2 3) (flmatrix: ((0.0 0.0 0.0) (0.0 0.0 0.0)))
> (ones 2) (flmatrix: ((1.0 1.0) (1.0 1.0)))
> (ones 2 3) (flmatrix: ((1.0 1.0 1.0) (1.0 1.0 1.0)))
> (list (eye 3) (eye 3 4) (eye 3 3 1) (eye 3 3 -1))
'((flmatrix: ((1.0 0.0 0.0) (0.0 1.0 0.0) (0.0 0.0 1.0)))
(flmatrix: ((1.0 0.0 0.0 0.0) (0.0 1.0 0.0 0.0) (0.0 0.0 1.0 0.0)))
(flmatrix: ((0.0 1.0 0.0) (0.0 0.0 1.0) (0.0 0.0 0.0)))
(flmatrix: ((0.0 0.0 0.0) (1.0 0.0 0.0) (0.0 1.0 0.0))))
To create ranges of values use arange or colarange, which both work like (matrix (range start stop step)) but avoid building an intermediate list. The functions arange and colarange produce row and column vectors respectively. The vector created has length (ceiling (/ (- stop start) step)).
(arange start stop step) create a row vector with values from start to stop (exclusively), here step is the gap between values
(arange start stop) like (arange start stop 1.0)
(arange stop) like (arange 0.0 stop 1.0)
(colarange start stop step)
(colarange start stop)
(colarange start)
like arange but produces a column vector.
> (arange 5 10 2) (flmatrix: ((5.0 7.0 9.0)))
> (arange 5 10) (flmatrix: ((5.0 6.0 7.0 8.0 9.0)))
> (arange 5) (flmatrix: ((0.0 1.0 2.0 3.0 4.0)))
> (colarange 5 10 2) (flmatrix: ((5.0) (7.0) (9.0)))
> (colarange 5 10) (flmatrix: ((5.0) (6.0) (7.0) (8.0) (9.0)))
> (colarange 5) (flmatrix: ((0.0) (1.0) (2.0) (3.0) (4.0)))
Sometimes it is possible to keep the elements of a matrix, but change its shape.
(reshape A m n)
return a matrix with shape m\times n using the elements of A
(reshape! A m n)
return a matrix with shape m\times n using the elements of A; the backing area is shared with A
> (arange 9) (flmatrix: ((0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0)))
> (reshape (arange 9) 3 3) (flmatrix: ((0.0 3.0 6.0) (1.0 4.0 7.0) (2.0 5.0 8.0)))
> (transpose (reshape (arange 9) 3 3)) (flmatrix: ((0.0 1.0 2.0) (3.0 4.0 5.0) (6.0 7.0 8.0)))
As an alternative to arange consider using linspace, which allows you to provide an exact endpoint.
(linspace start stop num)
return a column vector with num numbers evenly spaced from
start to stop
(linspace start stop num #f)
like (linspace start stop num) but omit the last number
> (linspace 2 4 6) (flmatrix: ((2.0) (2.4) (2.8) (3.2) (3.6) (4.0)))
> (linspace 2 4 6 #f) (flmatrix: ((2.0) (2.4) (2.8) (3.2) (3.6)))
2.4 Elementwise Operations
Elementwise operations (also called pointwise operations) work on each element. The operations are named with a leading period. Besides the elementwise versions of the standard arithmetic operations, the standard numerical functions also have elementwise counterparts. Binary operators work both on two matrices (of the same size) and on a number and a matrix.
(.+ A B)
(.- A B) and (.- A)
(.* A B)
(./ A B) and (./ A)
Elementwise versions of the arithmetic operations.
The operations return the result as a new matrix.
Note that .* is elementwise multiplication. Use times to multiply two matrices in the linear algebra sense.
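To see the difference, here is a small sketch comparing the two products on the same pair of matrices. The elementwise result matches the example below; the matrix product is worked out by hand from the definition of matrix multiplication.

```racket
#lang racket
(require flmatrix)

;; Elementwise product vs. matrix product on the same inputs.
(define A (matrix '((0 1) (2 3))))
(define B (matrix '((4 5) (6 7))))

(.* A B)    ; elementwise:    ((0.0 5.0) (12.0 21.0))
(times A B) ; matrix product: ((6.0 7.0) (26.0 31.0))
```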
> (define A (matrix '((0 1) (2 3)))) > (define B (matrix '((4 5) (6 7)))) > (.- A) (flmatrix: ((-0.0 -1.0) (-2.0 -3.0)))
> (./ A) (flmatrix: ((+inf.0 1.0) (0.5 0.3333333333333333)))
> (.+ A B) (flmatrix: ((4.0 6.0) (8.0 10.0)))
> (.- A B) (flmatrix: ((-4.0 -4.0) (-4.0 -4.0)))
> (.* A B) (flmatrix: ((0.0 5.0) (12.0 21.0)))
> (./ A B) (flmatrix: ((0.0 0.2) (0.3333333333333333 0.42857142857142855)))
> (define A (matrix '((0 1) (2 3)))) > (.+ A 1) (flmatrix: ((1.0 2.0) (3.0 4.0)))
> (.- A 2) (flmatrix: ((-2.0 -1.0) (0.0 1.0)))
> (.* 3 B) (flmatrix: ((12.0 15.0) (18.0 21.0)))
> (./ A 4) (flmatrix: ((0.0 0.25) (0.5 0.75)))
The elementwise versions of the standard numerical functions are:
(.sin A) (.cos A) (.tan A) (.exp A) (.log A) (.sqr A) (.sqrt A) (.expt A B)
> (define A (matrix '((0 1) (2 3)))) > (.sqr A) (flmatrix: ((0.0 1.0) (4.0 9.0)))
> (.expt A 2) (flmatrix: ((0.0 1.0) (4.0 9.0)))
> (.expt 2 A) (flmatrix: ((1.0 2.0) (4.0 8.0)))
The elementwise operations above all allocate a new matrix. If instead you want to modify the elements of an existing matrix, the following functions are for you.
(.-! A)
(./! A)
(.sin! A)
(.cos! A)
(.tan! A)
(.exp! A)
(.log! A)
(.sqr! A)
(.sqrt! A)
(.expt! A B)
(.+! A B)
(.-! A B)
(.*! A B)
(./! A B)
> (define A (matrix '((0 1) (2 3)))) > (.-! A) (flmatrix: ((-0.0 -1.0) (-2.0 -3.0)))
> (.-! A) (flmatrix: ((0.0 1.0) (2.0 3.0)))
> (.expt! A 2) (flmatrix: ((0.0 1.0) (4.0 9.0)))
> (.expt! A 2) (flmatrix: ((0.0 1.0) (16.0 81.0)))
Also, if you want to store the result of an elementwise operation in another matrix C, you can do as follows for the unary operations:
(.sin! A C) (.cos! A C) (.tan! A C) (.exp! A C) (.log! A C) (.sqr! A C) (.sqrt! A C)
For the binary operations, the target matrix C is passed as an extra last argument in the same way. Finally, for .-! and ./!, which are both unary and binary operations at once, use #f as B to get the unary version.
> (define A (matrix '((0 1) (2 3)))) > (define B (matrix '((4 5) (6 7)))) > (.-! B #f A) (flmatrix: ((-4.0 -5.0) (-6.0 -7.0)))
> A (flmatrix: ((-4.0 -5.0) (-6.0 -7.0)))
2.5 Indexing, Submatrices and Iterating
From the section on Basic Indexing we know that the element on row i in column j has index (i,j) and can be extracted with the function ref.
The i’th row and the j’th column can be extracted with row and col respectively.
To get a submatrix use sub and sub!.
(sub A i j m n)
Make a copy of the submatrix of A with upper left corner in (i,j) and with size m\times n.
(sub! A i j m n)
Same as sub, but the elements are not copied - the underlying
array of flonums is shared.
> (define A (transpose (reshape (arange 25) 5 5))) > A
(flmatrix:
((0.0 1.0 2.0 3.0 4.0)
(5.0 6.0 7.0 8.0 9.0)
(10.0 11.0 12.0 13.0 14.0)
(15.0 16.0 17.0 18.0 19.0)
(20.0 21.0 22.0 23.0 24.0)))
> (sub A 0 0 3 2) (flmatrix: ((0.0 1.0) (5.0 6.0) (10.0 11.0)))
> (sub A 1 1 2 2) (flmatrix: ((6.0 7.0) (11.0 12.0)))
The function sub! can be used to mutate part of a larger matrix.
Let’s say we have a matrix, in which we want to zero out all elements except those on the edges. We can use sub! to get a submatrix of the inner part, then use zeros! to clear the elements.
> (define A (transpose (reshape (arange 10 35) 5 5))) > A
(flmatrix:
((10.0 11.0 12.0 13.0 14.0)
(15.0 16.0 17.0 18.0 19.0)
(20.0 21.0 22.0 23.0 24.0)
(25.0 26.0 27.0 28.0 29.0)
(30.0 31.0 32.0 33.0 34.0)))
> (define B (sub! A 1 1 3 3)) > B (flmatrix: ((16.0 17.0 18.0) (21.0 22.0 23.0) (26.0 27.0 28.0)))
> (zeros! B) > A
(flmatrix:
((10.0 11.0 12.0 13.0 14.0)
(15.0 0.0 0.0 0.0 19.0)
(20.0 0.0 0.0 0.0 24.0)
(25.0 0.0 0.0 0.0 29.0)
(30.0 31.0 32.0 33.0 34.0)))
To iterate over a row or a column use in-flrow and in-flcolumn.
> (define A (matrix '((11 22) (33 44)))) > (for/list ([ x (in-flrow A 0)]) x) '(11.0 22.0)
> (for/list ([(i x) (in-flrow A 0)]) (list x i)) '((11.0 0) (22.0 1))
> (for/list ([ x (in-flcolumn A 0)]) x) '(11.0 33.0)
> (for/list ([(i x) (in-flcolumn A 0)]) (list x i)) '((11.0 0) (33.0 1))
2.6 Basic Linear Algebra
The basic linear algebra operations are plus, minus and times, which compute the sum, difference and product of a series of matrices.
(plus A ...)
(minus A ...)
(times A ...)
Computes the sum, difference and product of a series of matrices and/or numbers.
> (define A (matrix '((2 0) (0 2)))) > (define B (matrix '((1 2) (3 4)))) > (define C (column 4 5)) > (plus A B) (flmatrix: ((3.0 2.0) (3.0 6.0)))
> (plus A 10) (flmatrix: ((12.0 10.0) (10.0 12.0)))
> (plus A 10 B) (flmatrix: ((13.0 12.0) (13.0 16.0)))
> (minus A) (flmatrix: ((-2.0 -0.0) (-0.0 -2.0)))
> (minus A B) (flmatrix: ((1.0 -2.0) (-3.0 -2.0)))
> (times A B) (flmatrix: ((2.0 4.0) (6.0 8.0)))
> (times A 2 B) (flmatrix: ((4.0 8.0) (12.0 16.0)))
> (times A C) (flmatrix: ((8.0) (10.0)))
As usual, there are variants that mutate the first given matrix instead of allocating a new backing array of flonums.
(plus! A B ...)
(minus! A B ...)
Like plus and minus but store
the result in A, which must be a matrix.
> (define A (matrix '((2 0) (0 2)))) > (define B (matrix '((0 2) (2 0)))) > (plus! A B) (flmatrix: ((2.0 2.0) (2.0 2.0)))
> A (flmatrix: ((2.0 2.0) (2.0 2.0)))
(power A n)
Computes the n’th power of a matrix A, where n is a natural number.
> (define A (matrix '((1 1) (0 1)))) > (list (power A 0) (power A 1) (power A 2) (power A 3))
'((flmatrix: ((1.0 0.0) (0.0 1.0)))
(flmatrix: ((1.0 1.0) (0.0 1.0)))
(flmatrix: ((1.0 2.0) (0.0 1.0)))
(flmatrix: ((1.0 3.0) (0.0 1.0))))
2.7 Matrix and Vector Products
The inner product (also known as the dot product) of two column vectors can be computed by dot.
(dot v w)
Computes the inner product of two column vectors (i.e. matrices with only one column).
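A small sketch (the expected value is worked out by hand from the inner-product definition, 1*3 + 2*4 = 11; column is the column-vector constructor used elsewhere in this tutorial, and we assume dot returns the result as a flonum):

```racket
#lang racket
(require flmatrix)

;; Inner product of two column vectors: 1*3 + 2*4 = 11
(dot (column 1 2) (column 3 4))
```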
The outer product of a column vector A with m rows and a row vector B with n columns is an m\times n matrix O with elements o_{ij} = a_i\cdot b_j.
(outer A B)
Computes the outer product of the first column of A and the first row of B.
> (define A (column 2 3)) > (define B (transpose (column 5 7))) > (outer A B) (flmatrix: ((10.0 14.0) (15.0 21.0)))
The Kronecker product between two matrices A and B replaces each element a of A with a copy of B scaled by a. The Kronecker product is a generalization of the outer product.
(kron A B)
Computes the Kronecker product of the matrices A and B.
> (define A (matrix '((1 2) (3 4)))) > (define B (matrix '((1 1) (1 1)))) > (kron A B)
(flmatrix:
((1.0 1.0 2.0 2.0) (1.0 1.0 2.0 2.0) (3.0 3.0 4.0 4.0) (3.0 3.0 4.0 4.0)))
2.8 Matrix Decompositions
(cholesky A) (qr A) (svd A)
Computes the Cholesky, QR and SVD decompositions respectively.
The Singular Value Decomposition (SVD) returns three matrices: a unitary matrix U, a column vector of singular values S and a unitary matrix V^T (V transposed). The function diag constructs a diagonal matrix from the singular values.
> (define A (matrix '((1 2) (3 4)))) > (define-values (U S VT) (svd A)) > (define Σ (diag S)) > (list U Σ VT S)
'((flmatrix:
((-0.404553584833757 -0.9145142956773046)
(-0.9145142956773046 0.40455358483375703)))
(flmatrix: ((5.464985704219043 0.0) (0.0 0.36596619062625757)))
(flmatrix:
((-0.5760484367663209 -0.8174155604703632)
(0.8174155604703632 -0.5760484367663209)))
(flmatrix: ((5.464985704219043) (0.36596619062625757))))
> (times U Σ VT) (flmatrix: ((1.0000000000000004 2.0) (3.000000000000001 4.0)))
The QR Decomposition of A consists of two matrices: an orthogonal matrix Q and an upper triangular matrix R such that A=QR.
> (define A (matrix '((1 2) (3 4)))) > (define-values (Q R) (qr A)) > (list Q R)
'((flmatrix:
((-0.316227766016838 -0.9486832980505138)
(-0.9486832980505138 0.31622776601683805)))
(flmatrix:
((-3.1622776601683795 -4.427188724235731) (0.0 -0.6324555320336754))))
> (times Q R) (flmatrix: ((1.0000000000000002 2.0) (3.0 4.0)))
If the matrix A is symmetric and positive-definite, then the Cholesky decomposition can be computed. It comes in two forms, A = LL^T and A = U^TU,
where L and U are lower and upper triangular matrices.
Note: cholesky does not check that the input matrix A is symmetric and positive definite.
> (define A (matrix '((1 2) (2 4)))) > (define L (cholesky A)) > (list L (transpose L)) '((flmatrix: ((1.0 0.0) (2.0 0.0))) (flmatrix: ((1.0 2.0) (0.0 0.0))))
> (times L (transpose L)) (flmatrix: ((1.0 2.0) (2.0 4.0)))
> (define U (cholesky A 'upper)) > (list (transpose U) U) '((flmatrix: ((1.0 0.0) (2.0 0.0))) (flmatrix: ((1.0 2.0) (0.0 0.0))))
> (times (transpose U) U) (flmatrix: ((1.0 2.0) (2.0 4.0)))
2.9 Matrix Eigenvalues and Eigenvectors
Eigenvalues and eigenvectors of a square matrix can be computed with eig or, if only the eigenvalues are needed, with eigvals. Note that even if all elements of a matrix are real, the eigenvalues in some cases are complex. Therefore the eigenvalues are returned as a standard Racket vector.
(eig A)
Compute eigenvalues and right eigenvectors.
(eigvals A)
Compute eigenvalues.
> (eig (diag '(1 2)))
'#(1.0 2.0)
(flmatrix: ((1.0 0.0) (0.0 1.0)))
> (eigvals (diag '(1 2))) '#(1.0 2.0)
> (eig (matrix '((1 -1) (1 1))))
'#(1.0+1.0i 1.0-1.0i)
(flmatrix: ((0.7071067811865475 0.0) (0.0 -0.7071067811865475)))
2.10 Norms and Invariants
The standard 2-norm |\cdot|_2 can be computed by norm. For a column vector the norm is sometimes referred to as the length.
(norm A)
Compute the square root of the sum of the squares of all elements.
> (norm (matrix '((1 1)))) 1.4142135623730951
> (norm (matrix '((1 -1) (-1 1)))) 2.0
(det A)
Computes the determinant of a square matrix A.
> (det (matrix '((1 2) (0 4)))) 4.0
> (det (matrix '((1 1) (2 2)))) -0.0
(trace A)
Computes the trace, the sum along a diagonal, of a matrix.
> (trace (matrix '((1 2) (0 4)))) 5.0
(rank A)
Computes the rank of a square matrix.
The rank is the dimension of the column space,
which is equal to the dimension of the row space,
which is equal to the number of non-zero singular values
in an SVD decomposition.
> (rank (matrix '((1 2) (0 4)))) 2
> (rank (matrix '((1 1) (2 2)))) 1
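The connection between rank and singular values can be checked directly with svd from the decomposition section. A sketch, using the rank-1 matrix from the example above (its singular values, worked out by hand, are sqrt(10) and 0; floating point computation may print the second as a value very close to zero):

```racket
#lang racket
(require flmatrix)

;; Rank as the number of non-zero singular values.
;; ((1 1) (2 2)) has rank 1, so we expect one singular value
;; near (sqrt 10) ≈ 3.162 and one near zero.
(define-values (U S VT) (svd (matrix '((1 1) (2 2)))))
S
```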
2.11 Solving Equations and Inverting Matrices
Solving linear equations is more or less the raison d’être for matrices. The main workhorse is mldivide, which can solve for X in the equation:
AX = B,
where A is an m\times m matrix, and both X and B are m\times n.
Note that A needs to be of full rank for the equation to have a solution. The solver doesn’t check that the input matrix has full rank; it just runs its computation as usual. To check that the output from mldivide is indeed a solution, you can evaluate (times A X) and compare with B. The name mldivide is short for "Matrix Left divide". Although mldivide doesn’t find X by multiplying B with A^{-1} on the left, it is a fitting analogy.
(mldivide A B)
Solve the equation AX = B using LU-decomposition with
partial pivoting. The matrix A must be square and of full rank, the number
of rows in A must be the same as the number columns in B.
> (define A (matrix '((1 2) (3 4)))) > (define B (matrix '((1) (0)))) > (define X (mldivide A B))
> (list X (times A X))
'((flmatrix: ((-1.9999999999999996) (1.4999999999999998)))
(flmatrix: ((1.0) (8.881784197001252e-16))))
(mrdivide B A)
Solve the equation XA = B.
The name mrdivide is short for "Matrix Right divide".
> (define A (matrix '((1 2) (3 4)))) > (define B (matrix '((2 4) (6 8)))) > (define X (mrdivide B A))
> (list X (times X A)) '((flmatrix: ((2.0 0.0) (0.0 2.0))) (flmatrix: ((2.0 4.0) (6.0 8.0))))
(inv A)
Find the multiplicative inverse of a square matrix A.
> (define A (matrix '((1 2) (3 4)))) > (define Ainv (inv A)) > (define B (matrix '((1) (0)))) > (define X (times Ainv B)) > (list X (times A X))
'((flmatrix: ((-1.9999999999999996) (1.4999999999999998)))
(flmatrix: ((1.0) (8.881784197001252e-16))))
(pinv A)
Find the Moore-Penrose pseudo-inverse A^+ of the matrix A.
The matrix A does not need to be square.
The pseudo inverse of an m\times n matrix is of size n\times m.
> (define A (matrix '((1 2) (3 4)))) > (define A+ (pinv A)) > (list A+ (times A+ A A+) (times A A+ A))
'((flmatrix:
((-2.0000000000000013 1.0000000000000009)
(1.5000000000000013 -0.5000000000000007)))
(flmatrix:
((-2.0000000000000027 1.0000000000000018)
(1.5000000000000027 -0.5000000000000013)))
(flmatrix: ((1.0 2.000000000000001) (3.0000000000000018 4.0000000000000036))))
> (define B (matrix '((1 2 3) (4 5 6)))) > (define B+ (pinv B)) > (list B+ (times B+ B B+) (times B B+ B))
'((flmatrix:
((-0.9444444444444439 0.4444444444444441)
(-0.1111111111111109 0.11111111111111101)
(0.7222222222222218 -0.22222222222222193)))
(flmatrix:
((-0.9444444444444434 0.44444444444444375)
(-0.1111111111111108 0.11111111111111097)
(0.7222222222222211 -0.2222222222222216)))
(flmatrix:
((1.0000000000000009 2.000000000000001 3.000000000000001)
(3.999999999999999 4.999999999999999 5.999999999999999))))
2.12 Least Square Problems
Let A be an m\times n matrix and let b be an m\times 1 column vector. The equation Ax=b may (depending on A) not have a unique solution - or any solution at all.
As an alternative, one can look for the vector x that minimizes the norm |Ax-b|_2.
The function lstsq returns the minimum norm solution x of this least squares problem.
If lstsq is given an m\times k matrix B, then the problem will be solved for each column b of B.
(lstsq A B)
Find minimum norm solution to the least squares problem: "minimize |Ax-b|" ,
for each column b of a larger matrix B.
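For reference, when A has full column rank the least squares solution is characterized by the normal equations. This standard derivation is not specific to the library; it also connects lstsq to pinv from the previous section:

```latex
x^{*} = \operatorname*{arg\,min}_{x} \lVert Ax - b \rVert_2
\quad\Longleftrightarrow\quad
A^{\mathsf T} A \, x^{*} = A^{\mathsf T} b ,
\qquad\text{so}\qquad
x^{*} = (A^{\mathsf T} A)^{-1} A^{\mathsf T} b = A^{+} b .
```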
As an example, let’s look at estimating b_0 and b_1 in the model y = b_0\cdot x + b_1.
The matrix X is called the design matrix of the problem. See Design Matrix at Wikipedia. In this case the design matrix has two columns: the first contains the x-values, the second contains just ones.
> (define xs (column 0 1 2 3)) > (define ys (column 1 3 5 7)) > (define X (augment xs (flmatrix-ones (nrows xs) 1))) > X (flmatrix: ((0.0 1.0) (1.0 1.0) (2.0 1.0) (3.0 1.0)))
> (define B (lstsq X ys)) > B (flmatrix: ((1.9999999999999996) (0.9999999999999992)))
3 Installation
The package flmatrix is installed either in the terminal:
raco pkg install flmatrix
or using the Package Manager in DrRacket.
The package relies on the shared libraries CBLAS and LAPACK. Depending on your OS, you might need to install these yourself.
On macOS both CBLAS and LAPACK are part of the Accelerate Framework, which is distributed by Apple. This means no extra installation is needed.
On Linux you need copies of CBLAS and LAPACK. Since BLAS and LAPACK exist in multiple versions, a little care is needed. On most systems libblas is the Fortran version and libcblas the C version, so get the latter. On Debian, however, libblas exports the names used by CBLAS, so either ought to work.
On Windows: a tester is needed. Install CBLAS and LAPACK and let me know if it works. Otherwise make an issue at GitHub and we will add the proper paths.