Gorgonia Versions

Gorgonia is a library that helps facilitate machine learning in Go.

v0.9.18

5 months ago

Possibly the last release of the 0.9 branch before the major changes coming in v0.10.0.

v0.9.17

3 years ago

CI

CI (GitHub Actions) has a new template system that eases upgrading to new Go releases. On top of that, it now has a custom runner for ARM64, which led to discovering and fixing a couple of issues in the tests on ARM64.

Fixes

  • Support flat weights for the BatchNorm op (#465)
  • Fix the reset method of the tape machine (#467)
  • Fix clipping in the Adam solver (#469)
  • Fix the panic message in GlorotEtAlN64 (#470)
  • Fix the concurrent example (#472)

API change

  • Added functions to create primitive Value types (NewF64, NewF32, ...); see the sketch after this list (#481)
  • Breaking change: the BatchNorm1d function has been removed; the BatchNorm function now supports both 1d and 2d operations (#482)
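
A minimal sketch of the new constructors in use; wrapping the result in a scalar node via NewScalar/WithValue is my own usage illustration, not something prescribed by the release:

```go
package main

import (
	"fmt"

	"gorgonia.org/gorgonia"
)

func main() {
	// The new constructors wrap Go primitives as gorgonia Values.
	f64 := gorgonia.NewF64(3.14)
	f32 := gorgonia.NewF32(2.71)
	fmt.Println(f64, f32)

	// Such Values can back nodes in an expression graph.
	g := gorgonia.NewGraph()
	x := gorgonia.NewScalar(g, gorgonia.Float64,
		gorgonia.WithValue(f64), gorgonia.WithName("x"))
	fmt.Println(x.Value())
}
```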

v0.9.16

3 years ago

This version incorporates the clarified semantics of the tensor package; the unsafe-pointer handling has been cleaned up as well.

SoftMax also received a small bugfix: it no longer causes a race condition.

v0.9.15

3 years ago

When vectors were broadcast with a repeat of 1, one of the values was accidentally zeroed, leaving very strange artifacts in neural networks.

This has now been fixed.

v0.9.14

3 years ago

With the release of a new version of gorgonia.org/tensor, tensors now support complex numbers as well.
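
A minimal sketch of a complex-valued tensor, assuming the usual constructors (tensor.New, tensor.WithBacking) and that elementwise arithmetic covers the complex dtypes, as this release suggests:

```go
package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	// A 2×2 dense tensor backed by complex128 values.
	t := tensor.New(
		tensor.WithShape(2, 2),
		tensor.WithBacking([]complex128{1 + 2i, 3 - 1i, 1i, 2}),
	)
	fmt.Println(t)

	// Elementwise arithmetic on complex tensors.
	sum, err := tensor.Add(t, t)
	if err != nil {
		panic(err)
	}
	fmt.Println(sum)
}
```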

v0.9.13

3 years ago

This release references GoMachine's new implementation.

v0.9.12

3 years ago

The Upsample2D operator has been added by @cpllbstr. It is similar to the Upsample operator in PyTorch: https://pytorch.org/docs/master/generated/torch.nn.Upsample.html
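
A minimal sketch of the operator in use. The signature Upsample2D(x, scale), taking a node and an integer scale factor applied to both spatial dimensions of NCHW input, is an assumption here; consult the package documentation for the exact API:

```go
package main

import (
	"fmt"

	"gorgonia.org/gorgonia"
	"gorgonia.org/tensor"
)

func main() {
	g := gorgonia.NewGraph()

	// A 1×1×2×2 input image in NCHW layout.
	x := gorgonia.NewTensor(g, tensor.Float64, 4,
		gorgonia.WithShape(1, 1, 2, 2),
		gorgonia.WithValue(tensor.New(
			tensor.WithShape(1, 1, 2, 2),
			tensor.WithBacking([]float64{1, 2, 3, 4}),
		)),
	)

	// Upsample both spatial dimensions by a factor of 2.
	y, err := gorgonia.Upsample2D(x, 2)
	if err != nil {
		panic(err)
	}

	vm := gorgonia.NewTapeMachine(g)
	defer vm.Close()
	if err := vm.RunAll(); err != nil {
		panic(err)
	}
	fmt.Println(y.Value()) // expect a 1×1×4×4 result
}
```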

v0.9.11

3 years ago

Due to the great work by @wzzhu, shape inference is now a bit more robust. It goes back to the original Gorgonia understanding of shapes, in which reductions do not aggressively squeeze the dimensions.
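
To see what this means in practice, here is a small sketch that inspects the symbolically inferred shape of a reduction; which shape gets printed depends on the squeezing behaviour described above:

```go
package main

import (
	"fmt"

	"gorgonia.org/gorgonia"
	"gorgonia.org/tensor"
)

func main() {
	g := gorgonia.NewGraph()
	a := gorgonia.NewMatrix(g, tensor.Float64,
		gorgonia.WithShape(2, 3), gorgonia.WithName("a"))

	// Shape inference happens symbolically, at graph construction
	// time, before any values flow through the graph.
	s, err := gorgonia.Sum(a, 1)
	if err != nil {
		panic(err)
	}
	fmt.Println(s.Shape()) // the inferred shape of the reduction
}
```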

v0.9.10

4 years ago

In the previous version, the repeatOp was a compound operation. Effectively, it had this function signature: func repeat(a, nTimes *Node, axes ...int). So you could do something like repeat(a, 300, 1, 2, 3), in which a gets repeated 300 times across axes 1, 2 and 3.

This has been deoptimized so that it is effectively func repeat(a, repeat *Node, axis int). The reason for this deoptimization is that, upon further analysis of what the function actually does, it simply calls tensor.Repeat many times, causing many new tensors to be allocated. But the whole point of symbolic operations is that we may preallocate ahead of time.

This deoptimization allows the repeatOp to call tensor.RepeatReuse, which lets a repeat operation reuse preallocated values, leading to fewer allocations and improved performance.
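
At the tensor level, the pattern this enables looks roughly like the following; the RepeatReuse signature (t, reuse, axis, repeats...) is my reading of the tensor package and should be checked against its godoc:

```go
package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	a := tensor.New(tensor.WithShape(2), tensor.WithBacking([]float64{1, 2}))

	// Preallocate the destination once...
	reuse := tensor.New(tensor.WithShape(6), tensor.Of(tensor.Float64))

	// ...then repeat into it, avoiding a fresh allocation per call.
	r, err := tensor.RepeatReuse(a, reuse, 0, 3)
	if err != nil {
		panic(err)
	}
	fmt.Println(r) // [1 1 1 2 2 2]
}
```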

v0.9.9

4 years ago

Dropout had a long-standing bug that was fixed by @MarkKremer.