Adds MortonNDLutDecoder to bring fast Morton decoding to non-Intel CPU variants.
Adds unit testing for MortonNDLutDecoder.
Header mortonND_LUT_encoder.h is renamed to mortonND_LUT.h and also includes MortonNDLutDecoder.
v3.0.0
LUT Encoder
Adds native support for 128-bit encode type / result.
Adds support for user-defined encode types, which should allow for a "big integer"-like class to be used when > 128-bit encodings are desired. Note: this is currently untested. A future minor revision will introduce a sample project which uses such a class.
Makes the LutValue type public in MortonNDLutEncoder to ease debugging.
Adds type aliases for common configurations: 2D and 3D in 32 and 64 bits.
InputMask is now a constexpr function.
Encoding type (T) is now required to be a >= 32-bit unsigned integer, or a user-provided type. Previously, integer promotion of smaller encoding types could cause incorrect results.
Limits LutBits to 32 to prevent users from requesting unreasonably large lookup tables (table size grows exponentially with LutBits).
Encoding type is now a fast integer when left to automatic selection.
LUT value type is now separate from encoding type and is always automatically selected as the smallest unsigned integer type which can fit LUT values (Dimensions * LutBits). Note that it's therefore no longer possible to specify a user-defined type as the LUT value type.
Added interface documentation.
Encode method parameter count is now enforced by a static assertion rather than SFINAE. This provides a better error message when a user provides the incorrect number of fields.
Fixes a bug where ChunkCount could be over-shifted when LutBits was equal to the width of ChunkCount.
Fixes a bug where a LUT value could be over-shifted during calculation.
BMI Encoder / Decoder
Adds type aliases for common configurations: 2D and 3D in 32 and 64 bits.
Fixes a bug in which the Selector could be over-shifted during calculation.