EvoTorch Release Notes

EvoTorch is an advanced evolutionary computation library built directly on top of PyTorch, created at NNAISENSE.

v0.5.1

6 months ago

Fixes

v0.5.0

6 months ago

New Features

Fixes

v0.4.1

1 year ago

Fixes

Docs

v0.4.0

1 year ago

New Features

Fixes

Docs

v0.3.0

1 year ago

New

Vectorized gym support: Added a new problem class, evotorch.neuroevolution.VecGymNE, to solve vectorized gym environments. This new problem class can work with brax environments and can exploit GPU acceleration (#20).

PicklingLogger: Added a new logger, evotorch.logging.PicklingLogger, which periodically pickles and saves the current solution to disk (#20).

Python 3.7 support: The minimum supported Python version was lowered from 3.8 to 3.7. Therefore, EvoTorch can now be imported from within a Google Colab notebook (#16).

API Changes

@pass_info decorator: When working with GymNE (or with the newly introduced VecGymNE), if one uses a manual policy class and wishes to receive environment-related information via keyword arguments, that manual policy now needs to be decorated with @pass_info, as follows (#27):

from torch import nn
from evotorch.decorators import pass_info

@pass_info
class CustomPolicy(nn.Module):
    def __init__(self, **kwargs):
        super().__init__()
        # Environment-related information (e.g. observation and action
        # space details) arrives via the keyword arguments.
        ...

Recurrent policies: When defining a manual recurrent policy (as a subclass of torch.nn.Module) for GymNE or for VecGymNE, the user is now required to define the forward method of the module according to the following signature:

def forward(self, x: torch.Tensor, h: Any = None) -> Tuple[torch.Tensor, Any]:
    ...

Note: The optional argument h is the current state of the network, and the second value of the output tuple is the updated state of the network. A reset() method is no longer required and will be ignored (#20).
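As a minimal sketch, a recurrent policy satisfying this signature might look as follows. The dimensions, the RNNCell choice, and the class name are illustrative assumptions, not part of the EvoTorch API; only the forward signature comes from the release notes:

```python
from typing import Any, Tuple

import torch
from torch import nn


class RecurrentPolicy(nn.Module):
    """Minimal recurrent policy following the required forward signature."""

    def __init__(self, obs_dim: int = 4, act_dim: int = 2, hidden_size: int = 8):
        super().__init__()
        self.rnn = nn.RNNCell(obs_dim, hidden_size)
        self.head = nn.Linear(hidden_size, act_dim)
        self.hidden_size = hidden_size

    def forward(self, x: torch.Tensor, h: Any = None) -> Tuple[torch.Tensor, Any]:
        if h is None:
            # A None state marks the start of an episode: begin from zeros.
            # This convention replaces the old reset() method.
            h = x.new_zeros(x.shape[:-1] + (self.hidden_size,))
        h = self.rnn(x, h)
        # Return the action together with the updated state.
        return self.head(h), h
```

Returning the updated state (rather than mutating it in place) is what lets the caller manage episode boundaries by simply passing h=None.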

Fixes

Fixed a performance issue caused by the undesired cloning of the entire storages of tensor slices (#21).

Fixed the signature and the docstrings of the overridable method _do_cross_over(...) of the class evotorch.operators.CrossOver (#30).

Docs

Added more example scripts and updated the related README file (#19).

Updated the documentation related to GPU usage with ray (#28).

v0.2.0

1 year ago

Fixes:

  • Fix docstrings in gaussian.py (#11) (@engintoklu)
  • Fix for str_to_net(...) (#12) (@engintoklu)
  • Hard-code network_device property to CPU for GymNE (#6) (@NaturalGradient)

Docs:

  • Fix comment in the Gym experiments notebook (#5) (@engintoklu)
  • Improve code formatting in docstrings (#3) (@flukeskywalker)
  • Add documentation of NeptuneLogger class (#15) (@NaturalGradient)

v0.1.1

1 year ago

What's Changed

  • Re-arrange pip dependencies to make the default installation of EvoTorch runnable in most scenarios
  • Add docs badge and landing page link to the README
  • Fix broken links in PyPI

v0.1.0

1 year ago

We are excited to release the first public version of EvoTorch - an evolutionary computation library created at NNAISENSE.

With EvoTorch, one can solve various optimization problems without having to worry about whether or not the problem at hand is differentiable. Among the problem types solvable with EvoTorch are:

  • Black-box optimization problems (continuous or discrete)
  • Reinforcement learning tasks
  • Supervised learning tasks
  • and more

Various evolutionary computation algorithms are available in EvoTorch:

  • Distribution-based search algorithms:
    • PGPE: Policy Gradients with Parameter-based Exploration.
    • XNES: Exponential Natural Evolution Strategies.
    • SNES: Separable Natural Evolution Strategies.
    • CEM: Cross-Entropy Method.
  • Population-based search algorithms:
SteadyStateGA: A fully elitist genetic algorithm implementation. It also supports multiple objectives, in which case it behaves like NSGA-II.
    • CoSyNE: Cooperative Synapse Neuroevolution.
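To give a taste of what a distribution-based search algorithm does, here is a pure-PyTorch sketch of the Cross-Entropy Method minimizing the sphere function. This is a standalone illustration of the idea (sample from a Gaussian, keep the elites, refit the Gaussian), not EvoTorch's own API; all names and hyperparameters are made up for the example:

```python
import torch


def cem_minimize(f, dim: int, iters: int = 50, popsize: int = 64,
                 elite_frac: float = 0.25) -> torch.Tensor:
    """Cross-Entropy Method sketch: sample, select elites, refit the Gaussian."""
    mean = torch.zeros(dim)
    std = torch.ones(dim)
    n_elite = max(1, int(popsize * elite_frac))
    for _ in range(iters):
        # Vectorized sampling and evaluation: one tensor op per generation.
        pop = mean + std * torch.randn(popsize, dim)
        scores = f(pop)  # f maps (popsize, dim) -> (popsize,)
        elite = pop[scores.argsort()[:n_elite]]
        # Refit the search distribution to the best solutions.
        mean = elite.mean(dim=0)
        std = elite.std(dim=0) + 1e-6  # small floor to avoid collapse
    return mean


# Minimize the sphere function f(x) = sum(x_i^2); the optimum is the origin.
best = cem_minimize(lambda x: (x ** 2).sum(dim=-1), dim=5)
```

Because the whole population is a single tensor, each generation is evaluated in one vectorized call, which is exactly what lets such algorithms benefit from GPU acceleration in PyTorch.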

All of these algorithms are implemented in PyTorch and can therefore benefit from its vectorization and GPU capabilities. In addition, with the help of the Ray library, EvoTorch can scale up further by splitting the workload across:

  • multiple CPUs
  • multiple GPUs
  • multiple computers over a Ray cluster