A collection of numerical methods written in Nim
- `meshgrid` by @HugoGranstrom in https://github.com/SciNim/numericalnim/pull/42
Full Changelog: https://github.com/SciNim/numericalnim/compare/v0.8.8...v0.8.9
The 1D interpolation methods now support extrapolation using these methods:

- `Constant`: Set all points outside the range of the interpolator to `extrapValue`.
- `Edge`: Use the value of the left/right edge.
- `Linear`: Uses linear extrapolation based on the two points closest to the edge.
- `Native` (default): Uses the native method of the interpolator to extrapolate. For `Linear1D` it will be a linear extrapolation, and for Cubic and Hermite splines it will be cubic extrapolation.
- `Error`: Raises a `ValueError` if `x` is outside the range.

These are passed in as an argument to `eval` and `derivEval`:
```nim
let valEdge = interp.eval(x, Edge)
let valConstant = interp.eval(x, Constant, NaN)
```
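For a fuller picture, here is a minimal sketch; the sample data and the choice of `newHermiteSpline` as the interpolator are illustrative assumptions, not something prescribed by this release:

```nim
# Sketch: build a 1D interpolator and evaluate it outside its range.
import numericalnim, std/[math, sequtils]

let x = linspace(0.0, 10.0, 100)
let y = x.mapIt(sin(it))
let interp = newHermiteSpline(x, y)

echo interp.eval(5.0)                  # inside the range: no extrapolation
echo interp.eval(12.0, Edge)           # clamps to the value at x = 10
echo interp.eval(12.0, Constant, NaN)  # returns the supplied extrapValue
echo interp.eval(12.0)                 # Native (default): cubic extrapolation
```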
- `levmarq` now accepts `yError`.
- `paramUncertainties` allows you to calculate the uncertainties of fitted parameters.
- `chi2` test added.
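A sketch of how the new pieces might fit together; note that passing `yError` through `levmarqOptions` and the exact `paramUncertainties(params, fitFunc, xData, yData, yError)` signature (returning parameter variances) are assumptions based on these notes, so check the docs before relying on them:

```nim
# Sketch only: the yError/paramUncertainties signatures are assumptions.
import numericalnim, arraymancer, std/math

# Hypothetical model: f(x) = a * exp(-b * x), params = [a, b]
proc fitFunc(params: Tensor[float], x: float): float =
  params[0] * exp(-params[1] * x)

let xData = @[0.0, 1.0, 2.0, 3.0, 4.0].toTensor
let yData = @[1.0, 0.61, 0.37, 0.22, 0.14].toTensor
let yError = @[0.05, 0.05, 0.05, 0.05, 0.05].toTensor  # 1-sigma errors on y

let guess = @[0.5, 0.5].toTensor
let params = levmarq(fitFunc, guess, xData, yData,
                     options = levmarqOptions[float](yError = yError))
# Assumed to return the variance of each fitted parameter:
let variances = paramUncertainties(params, fitFunc, xData, yData, yError)
echo params
echo sqrt(variances)  # parameter standard deviations
```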
Full Changelog: https://github.com/SciNim/numericalnim/compare/v0.8.5...v0.8.6
Full Changelog: https://github.com/SciNim/numericalnim/compare/v0.8.4...v0.8.5
With radial basis function interpolation, `numericalnim` finally gets an interpolation method which works on scattered data in arbitrary dimensions!
Basic usage:
```nim
let interp = newRbf(points, values)
let result = interp.eval(evalPoints)
```
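As a self-contained sketch (the shapes, `points` as a `(nPoints, nDims)` tensor and `values` as a rank-1 tensor, are assumptions to illustrate the call; the data is made up):

```nim
# Sketch: RBF interpolation of four scattered 2D points.
import numericalnim, arraymancer

let points = @[@[0.0, 0.0], @[1.0, 0.0], @[0.0, 1.0], @[1.0, 1.0]].toTensor
let values = @[0.0, 1.0, 1.0, 2.0].toTensor

let interp = newRbf(points, values)

# Evaluate at new scattered points (also a (n, nDims) tensor)
let evalPoints = @[@[0.5, 0.5], @[0.25, 0.75]].toTensor
echo interp.eval(evalPoints)
```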
Full Changelog: https://github.com/SciNim/numericalnim/compare/v0.8.3...v0.8.4
Full Changelog: https://github.com/SciNim/numericalnim/compare/v0.8.2...v0.8.3
Multi-variate optimization and differentiation have been introduced.

`numericalnim/differentiate` offers `tensorGradient(f, x)`, which calculates the gradient of `f` w.r.t. `x` using finite differences, as well as `tensorJacobian` (returns the transpose of the gradient), `tensorHessian` and `mixedDerivative`. It also provides `checkGradient(f, analyticGrad, x, tol)` to verify that the analytic gradient is correct by comparing it to the finite difference approximation.
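For instance, a minimal sketch (the function and its analytic gradient are made up for illustration):

```nim
# Sketch: compare a finite-difference gradient with an analytic one.
import numericalnim, arraymancer

# f(x) = x0^2 + x0 * x1
proc f(x: Tensor[float]): float =
  x[0]*x[0] + x[0]*x[1]

# Analytic gradient: (2*x0 + x1, x0)
proc grad(x: Tensor[float]): Tensor[float] =
  @[2.0*x[0] + x[1], x[0]].toTensor

let x0 = @[1.0, 2.0].toTensor
echo tensorGradient(f, x0)             # finite-difference approximation
echo checkGradient(f, grad, x0, 1e-6)  # true if they agree within tol
```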
`numericalnim/optimize` now has several multi-variate optimization methods:
- `steepestDescent`
- `newton`
- `bfgs`
- `lbfgs`
```nim
proc bfgs*[U; T: not Tensor](f: proc(x: Tensor[U]): T, x0: Tensor[U], options: OptimOptions[U, StandardOptions] = bfgsOptions[U](), analyticGradient: proc(x: Tensor[U]): Tensor[T] = nil): Tensor[U]
```
where `f` is the function to be minimized, `x0` is the starting guess, `options` contains settings like the tolerance (each method has its own options type, which can be created by for example `lbfgsOptions` or `newtonOptions`), and `analyticGradient` can be supplied to avoid having to do finite difference approximations of the derivatives.

The line search method is chosen through the options: `Armijo`, `Wolfe`, `WolfeStrong` or `NoLineSearch`.
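A minimal usage sketch (the Rosenbrock function and starting point are made up for illustration):

```nim
# Sketch: minimize the Rosenbrock function with bfgs and default options.
import numericalnim, arraymancer, std/math

proc rosenbrock(x: Tensor[float]): float =
  (1.0 - x[0])^2 + 100.0 * (x[1] - x[0]*x[0])^2

let x0 = @[-1.0, 2.0].toTensor
echo bfgs(rosenbrock, x0)  # should end up close to [1.0, 1.0]
```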
`levmarq` is a non-linear least-squares optimizer:
```nim
proc levmarq*[U; T: not Tensor](f: proc(params: Tensor[U], x: U): T, params0: Tensor[U], xData: Tensor[U], yData: Tensor[T], options: OptimOptions[U, LevmarqOptions[U]] = levmarqOptions[U]()): Tensor[U]
```
where:

- `f` is the function you want to fit: `params` is the tensor of parameters and `x` is the value to evaluate the function at.
- `params0` is the initial guess for the parameters.
- `xData` is a 1D Tensor with the x points and `yData` is a 1D Tensor with the y points.
- `options` can be created using `levmarqOptions`.
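Putting it together, a minimal sketch (the linear model and data are made up for illustration):

```nim
# Sketch: fit the line f(x) = a * x + b to made-up data with default options.
import numericalnim, arraymancer

proc fitFunc(params: Tensor[float], x: float): float =
  params[0] * x + params[1]

let xData = @[0.0, 1.0, 2.0, 3.0, 4.0].toTensor
let yData = @[1.1, 2.9, 5.2, 7.1, 8.8].toTensor
let guess = @[1.0, 0.0].toTensor

echo levmarq(fitFunc, guess, xData, yData)  # fitted [a, b], roughly [2, 1]
```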
.Note: There are basic tests to ensure these methods converge for simple problems, but they are not tested on more complex problems and should be considered experimental until more tests have been done. Please try them out, but don't rely on them for anything important for now. Also, the API isn't set in stone yet so expect that it may change in future versions.
Adds the task `nimCI`, which is to be run by the Nim CI.