Mab Versions

Library for multi-armed bandit selection strategies, including efficient deterministic implementations of Thompson sampling and epsilon-greedy.
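To make the selection-strategy idea concrete, here is a minimal, self-contained sketch of epsilon-greedy arm selection, one of the two strategies named above. It is illustrative only, uses the standard library, and does not mirror the mab package's actual API (which advertises deterministic implementations).

```go
package main

import (
	"fmt"
	"math/rand"
)

// epsilonGreedy returns the index of the arm to pull: with probability eps it
// explores a uniformly random arm, otherwise it exploits the arm with the
// highest observed mean reward.
func epsilonGreedy(meanRewards []float64, eps float64, rng *rand.Rand) int {
	if rng.Float64() < eps {
		return rng.Intn(len(meanRewards))
	}
	best := 0
	for i, m := range meanRewards {
		if m > meanRewards[best] {
			best = i
		}
	}
	return best
}

func main() {
	rng := rand.New(rand.NewSource(1))
	means := []float64{0.12, 0.30, 0.25} // observed mean reward per arm
	counts := make([]int, len(means))
	for i := 0; i < 1000; i++ {
		counts[epsilonGreedy(means, 0.1, rng)]++
	}
	fmt.Println("pull counts per arm:", counts) // arm 1 dominates, others still get explored
}
```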

v0.1.0

3 years ago

Parallelizes the Thompson sampling strategy for a massive speed boost!
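The release note only states that Thompson sampling was parallelized. The sketch below shows one common pattern for that kind of speedup, drawing each arm's posterior sample in its own goroutine, using Gaussian posteriors as a stand-in. It is an assumption-laden illustration, not the package's code.

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
)

type arm struct {
	mu, sigma float64 // posterior mean and std dev of the arm's reward
}

// sampleAll draws one Thompson sample per arm concurrently and returns the
// index of the arm with the largest sampled value.
func sampleAll(arms []arm, seed int64) int {
	samples := make([]float64, len(arms))
	var wg sync.WaitGroup
	for i, a := range arms {
		wg.Add(1)
		go func(i int, a arm) {
			defer wg.Done()
			// Each goroutine gets its own RNG: *rand.Rand is not safe for
			// concurrent use, and separate sources avoid lock contention.
			rng := rand.New(rand.NewSource(seed + int64(i)))
			samples[i] = a.mu + a.sigma*rng.NormFloat64()
		}(i, a)
	}
	wg.Wait()
	best := 0
	for i, s := range samples {
		if s > samples[best] {
			best = i
		}
	}
	return best
}

func main() {
	arms := []arm{{0.10, 0.02}, {0.14, 0.05}, {0.12, 0.01}}
	fmt.Println("selected arm:", sampleAll(arms, 42))
}
```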

v0.0.4

3 years ago

v0.0.3

3 years ago

Parsers now take an io.ReadCloser instead of a []byte
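A hypothetical before/after of the change described above: a parser that accepted a raw []byte now takes an io.ReadCloser (for example an HTTP response body), letting it stream and close the input itself. Function and type names here are illustrative, not the package's real identifiers.

```go
package parse

import (
	"encoding/json"
	"io"
)

// Old style: the caller had to read the whole payload into memory first.
// func ParseRewards(data []byte) ([]float64, error)

// New style: the parser consumes the reader directly and closes it when done.
func ParseRewards(r io.ReadCloser) ([]float64, error) {
	defer r.Close()
	var rewards []float64
	if err := json.NewDecoder(r).Decode(&rewards); err != nil {
		return nil, err
	}
	return rewards, nil
}
```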

v0.0.2

3 years ago

- Adds HTTPSource implementation of RewardSource
- Adds a lot of unit tests
- Minor tweaks to API
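The note above introduces an HTTPSource implementation of RewardSource, but the interface itself is not shown here. The sketch below assumes a minimal shape: a RewardSource returns per-arm reward estimates, and HTTPSource fetches them as JSON from a URL. Method names and types are assumptions, not the package's real API.

```go
package source

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
)

// RewardSource supplies the per-arm reward estimates a bandit strategy needs.
// (Assumed interface shape for illustration.)
type RewardSource interface {
	GetRewards(ctx context.Context) ([]float64, error)
}

// HTTPSource fetches reward estimates from a JSON endpoint.
type HTTPSource struct {
	URL    string
	Client *http.Client // nil means http.DefaultClient
}

func (s HTTPSource) GetRewards(ctx context.Context) ([]float64, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, s.URL, nil)
	if err != nil {
		return nil, err
	}
	client := s.Client
	if client == nil {
		client = http.DefaultClient
	}
	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("reward endpoint returned %s", resp.Status)
	}
	var rewards []float64
	if err := json.NewDecoder(resp.Body).Decode(&rewards); err != nil {
		return nil, err
	}
	return rewards, nil
}
```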