RobOptim Core Layer: interface and basic mathematical tools
- Problem and Solver are now only templated on the matrix type. See the migration guide in the next section. Impact on end-user code is limited; the biggest changes happen in plugin code.
- latex/dvips are no longer required for building the documentation. A minimal MathJax will be bundled instead. This explains the larger tarballs.
- Added: NumericQuadraticFunction hessian, updateSparseBlock helper function, CachedFunction, Result structure, SolverCallback class to be used with the Multiplexer, and jacobian(x) and constraintsOutputSize() methods to Problem.
Problem and Solver were templated on the cost function and constraints types, for instance:

```cpp
// Specify the solver type that will be used
typedef Solver<DifferentiableFunction, boost::mpl::vector<LinearFunction, DifferentiableFunction> > solver_t;
// Deduce the problem type, which is Problem<DifferentiableFunction, boost::mpl::vector<LinearFunction, DifferentiableFunction> >
typedef solver_t::problem_t problem_t;
```
Now, this is handled at runtime, and all that remains is the matrix type used:

```cpp
// Specify the solver type that will be used
typedef Solver<EigenMatrixDense> solver_t;
// Deduce the problem type, which is Problem<EigenMatrixDense>
typedef solver_t::problem_t problem_t;
```
The Problem constructor taking a function reference is now deprecated:

```cpp
// Instantiate the cost function
CostFunction cost (param);
// Create problem
solver_t::problem_t pb (cost);
```
Instead, use the boost::shared_ptr version:

```cpp
// Instantiate the cost function
boost::shared_ptr<CostFunction> f (new CostFunction (param));
// Create problem
solver_t::problem_t pb (f);
```
Since the function types are no longer part of the problem type, functions can be checked and cast at runtime:

```cpp
// f is a (shared) pointer to a function, and we want to cast it
// to a LinearFunction if that is possible
LinearFunction* g = 0;
if (f->asType<LinearFunction> ())
{
  g = f->castInto<LinearFunction> ();
}
```
The implementation thus relies on a cheap static_cast rather than an expensive dynamic_cast. Note that castInto also accepts a boolean parameter that enables the asType check internally, and throws if the cast is invalid.
- ColMajor/RowMajor support has been improved (cf. #89). The default is back to ColMajor, since this is Eigen's default mode, but that can be changed with a CMake variable.
- Added a matplotlib plotting backend (cf. #94).
- Added vector_t and bool to the solver parameter types. As a consequence, std::string parameters should not rely on automatic conversion from const char* (cf. 7a0bbb74c60dd21a9f467a20fe02df67c2dd689f). Basically:

```cpp
// This will be converted to bool:
parameters["key"].value = "value";
// While this will be a string:
parameters["key"].value = std::string ("value");
```
- Renamed scale[s]* to scaling* (cf. 434559c940f0866724c24bc32921ee8adfcb005f). Previous methods/typedefs are currently kept for backward compatibility, but marked as deprecated.
- Added the StructuredInput helper (cf. #96).

Improvements:
- Eigen::Ref: RobOptim functions now accept blocks/segments of Eigen matrices as input.
- CachedFunction now uses an LRU cache.

Other changes:
- Removed throw () exception specifiers: void function (...) throw () becomes void function (...).
- Added StorageOrder to GenericFunctionTraits.
- Added roboptim/core/detail/utility.hh.
- Merged roboptim/core/visualization/util.hh with roboptim/core/util.hh.
- References to argument_t were replaced by dedicated typedefs:
  - argument_t& ---> argument_ref
  - const argument_t& ---> const_argument_ref

The same goes for vector_t, matrix_t, result_t, gradient_t, jacobian_t and hessian_t.
For instance, the signature of impl_compute was:

```cpp
void impl_compute (result_t& result, const argument_t& argument) const throw ();
```

Now, it is:

```cpp
void impl_compute (result_ref result, const_argument_ref argument) const;
```
The reason behind this change is that we now use Eigen::Ref: const references become const Eigen::Ref<const xxx_t>&, while references become Eigen::Ref<xxx_t>&. That way, signatures stay simple, and using Eigen::Ref makes it possible to avoid both temporary allocations and extra copies, thus increasing RobOptim's performance. However, note that you SHOULD NOT use these typedefs as return types, since that would return references to temporary objects.
Version 2.0 of roboptim-core depends on Eigen for matrix computations by default. Traits allow users to supply their own matrix type; support for Eigen dense and sparse matrices is built-in.