:zap: :zap: Deep RL Algotrading with Ray API :zap: :zap:
A deep reinforcement learning, multi-agent algorithmic trading framework that learns to trade from experience and is then evaluated on brand-new data.
This repository is no longer maintained. Other versions are running in private mode.
You will need an API key from CryptoCompare.
```sh
# paste your API key into .env
cp .env.example .env
```
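To sanity-check the key, here is a minimal sketch (not part of this repo) of fetching daily OHLCV candles from CryptoCompare's public `histoday` endpoint. The `CRYPTOCOMPARE_API_KEY` variable name is an assumption; match whatever name `.env.example` actually uses.

```python
# minimal sketch: pull daily candles from CryptoCompare using the key in .env
import os

import requests
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env into the process environment

resp = requests.get(
    "https://min-api.cryptocompare.com/data/v2/histoday",
    params={
        "fsym": "BTC",    # base asset
        "tsym": "USD",    # quote currency
        "limit": 600,     # number of daily candles
        "api_key": os.environ["CRYPTOCOMPARE_API_KEY"],  # hypothetical var name
    },
)
candles = resp.json()["Data"]["Data"]
print(candles[0])  # {'time': ..., 'open': ..., 'high': ..., 'low': ..., 'close': ...}
```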
```sh
# make sure you have these installed
sudo apt-get install gcc g++ build-essential python-dev python3-dev -y

# create the environment
conda env create -f t-1000.yml

# activate it
conda activate t-1000
```
```sh
# to see all available arguments
python main.py --help

# to train
python main.py -a btc eth bnb -c usd

# to test
python main.py \
--checkpoint_path results/t-1000/model-hash/checkpoint_750/checkpoint-750
```
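For reference, here is a hypothetical `argparse` sketch of the flags used above; the real parser in `main.py` may declare them differently.

```python
# hypothetical sketch of the CLI surface shown above; main.py's actual
# parser may differ
import argparse

parser = argparse.ArgumentParser(description="T-1000 trading agent")
parser.add_argument("-a", "--assets", nargs="+", default=["btc"],
                    help="assets to trade, e.g. btc eth bnb")
parser.add_argument("-c", "--currency", default="usd",
                    help="quote currency")
parser.add_argument("--checkpoint_path", default=None,
                    help="restore a saved checkpoint and run a backtest")

args = parser.parse_args()
print(args.assets, args.currency, args.checkpoint_path)
```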
```python
# instantiate the environment
T_1000 = CreateEnv(assets=['OMG', 'BTC', 'ETH'],
                   currency='USDT',
                   granularity='day',
                   datapoints=600)

# define the hyperparameters to train with
T_1000.train(timesteps=5e4,
             checkpoint_freq=10,
             lr_schedule=[
                 [
                     [0, 7e-5],  # [timestep, lr]
                     [100, 7e-6],
                 ],
                 [
                     [0, 6e-5],
                     [100, 6e-6],
                 ]
             ],
             algo='PPO')
```
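Note that `lr_schedule` takes a list of schedules, presumably one per agent in the multi-agent setup, each made of `[timestep, learning_rate]` pairs. Assuming RLlib-style piecewise-linear interpolation (an assumption, not verified against this repo's internals), the rate between two anchor points is blended like this:

```python
# sketch of how a [timestep, lr] schedule is commonly interpreted
# (piecewise-linear interpolation, as in RLlib); an assumption, not
# this repo's verified internals
def lr_at(t, schedule):
    """Linearly interpolate the learning rate at timestep t."""
    t0, v0 = schedule[0]
    for t1, v1 in schedule[1:]:
        if t <= t1:
            frac = (t - t0) / (t1 - t0)
            return v0 + frac * (v1 - v0)
        t0, v0 = t1, v1
    return schedule[-1][1]  # hold the last value after the schedule ends

print(lr_at(50, [[0, 7e-5], [100, 7e-6]]))  # 3.85e-05, halfway between anchors
```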
Once you have a satisfactory `reward_mean` benchmark, you can see how the agent performs on data it has never seen.
```python
# same environment
T_1000 = CreateEnv(assets=['OMG', 'BTC', 'ETH'],
                   currency='USDT',
                   granularity='day',
                   datapoints=600)

# checkpoints are saved in /results
# a different time period from training is automatically used to backtest
T_1000.backtest(checkpoint_path='path/to/checkpoint_file/checkpoint-400')
```
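The exact split logic lives inside the library, but the idea is a plain time-based holdout: train on the older candles, backtest on the most recent ones. A toy illustration (the 80/20 ratio here is an assumption):

```python
# toy time-based split; the real split inside T-1000 may differ
candles = list(range(600))            # stand-in for 600 daily OHLCV rows
train_size = int(len(candles) * 0.8)  # hypothetical 80/20 ratio
train, test = candles[:train_size], candles[train_size:]
print(len(train), len(test))          # 480 120 -- the test rows are never trained on
```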
"It just needs to touch something to mimic it." - Sarah Connor, about the T-1000
Some nice tools to keep an eye on while your agent trains are (of course) tensorboard, gpustat, and htop.
```sh
# from the project home folder
tensorboard --logdir=models

# show how your GPU is doing
gpustat -i

# show how your CPU and RAM are doing
htop
```