AlphaGo Zero Gobang

Meta-Zeta is a reinforcement-learning Gobang (Gomoku) model. It is a demo meant mainly to illustrate how AlphaGo Zero works: how the neural network guides MCTS in making decisions, and how the system learns through self-play. Source code + tutorial.


AlphaGo-Zero-Gobang

  • Do you like to play Gobang?
  • Do you want to know how AlphaGo Zero works?
  • Check it out!

You can also read my Blog :)

View a Demo

This is a self-play model based on reinforcement learning. The running program looks like this:


Quick Start

python3 MetaZeta.py

Train

We build an AI player that makes decisions with MCTS, assisted by a residual neural network that predicts where to play.

  • How to use: click AI self-play, then click Start in the upper-right corner

Test

We can play against the trained AI player to test how well it plays.

  • How to use: click Play against AI, then click Start in the upper-right corner

Environment

  • Ubuntu 18.04.6 LTS
  • tensorflow-gpu==2.6.2

File Structure

filename     type   description
TreeNode.py  MCTS   A node of the MCTS decision tree
MCTS.py      MCTS   Builds the MCTS decision tree
AIplayer.py  MCTS   The AI player based on MCTS + the neural network
Board.py     Board  Stores the board information
Game.py      Board  Defines the game flow for self-play and play-with-human
PolicyNN.py  NN     Builds the residual neural network
MetaZeta.py  Main   GUI that ties all the pieces together (all in one)

How it works (with code explanation)

1. Board design

First, we need to design a representation that describes the information on the board.
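
Board.py holds the actual implementation. As a rough illustration only, one common approach is to keep the played moves in a dictionary keyed by a flat board index and to encode the position as feature planes for the network. The names below (SimpleBoard, states, availables, current_state) are illustrative, not the real Board.py API:

```python
import numpy as np

class SimpleBoard:
    """Minimal Gobang board sketch: moves are flat indices 0..width*height-1."""

    def __init__(self, width=8, height=8, n_in_row=5):
        self.width, self.height, self.n_in_row = width, height, n_in_row
        self.states = {}                              # move index -> player id (1 or 2)
        self.availables = list(range(width * height)) # empty points
        self.current_player = 1

    def do_move(self, move):
        """Place a stone for the current player and switch turns."""
        self.states[move] = self.current_player
        self.availables.remove(move)
        self.current_player = 2 if self.current_player == 1 else 1

    def current_state(self):
        """Encode the position as feature planes for the neural network.
        Plane 0: current player's stones, plane 1: opponent's stones,
        plane 3: whose turn it is (plane 2, e.g. the last move, is omitted here)."""
        planes = np.zeros((4, self.width, self.height))
        for move, player in self.states.items():
            row, col = divmod(move, self.width)
            plane = 0 if player == self.current_player else 1
            planes[plane, row, col] = 1.0
        planes[3][:] = 1.0 if self.current_player == 1 else 0.0
        return planes
```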

2. Residual Neural Network

Next, we need to build a residual neural network (network structure).
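
PolicyNN.py defines the real architecture. As a rough AlphaGo Zero-style sketch, the network is a convolutional trunk of residual blocks feeding two heads: a policy head that outputs a probability for every board point, and a value head that estimates the outcome for the player to move. The layer counts and sizes below are placeholders, written with tf.keras:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    """Two conv layers with a skip connection, as in AlphaGo Zero."""
    shortcut = x
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Add()([x, shortcut])
    return layers.ReLU()(x)

def build_policy_value_net(board_size=8, n_blocks=3, filters=64):
    """Shared trunk -> policy head (move probabilities) + value head (win estimate)."""
    inputs = layers.Input(shape=(4, board_size, board_size))
    x = layers.Permute((2, 3, 1))(inputs)   # channels-first planes -> channels-last
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    for _ in range(n_blocks):
        x = residual_block(x, filters)
    # Policy head: one probability per board point
    p = layers.Conv2D(2, 1, activation="relu")(x)
    p = layers.Flatten()(p)
    policy = layers.Dense(board_size * board_size, activation="softmax", name="policy")(p)
    # Value head: scalar in [-1, 1] estimating the game outcome
    v = layers.Conv2D(1, 1, activation="relu")(x)
    v = layers.Flatten()(v)
    v = layers.Dense(64, activation="relu")(v)
    value = layers.Dense(1, activation="tanh", name="value")(v)
    return tf.keras.Model(inputs, [policy, value])
```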

3. MCTS ✨✨✨

Next, we need to understand how the AI makes decisions: how it accumulates knowledge of the game, and how it uses that knowledge to choose its moves.
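
The key idea is the PUCT selection rule: every tree node stores a visit count N, a mean action value Q, and a prior probability P taken from the policy network, and the search repeatedly descends to the child with the highest score Q + U, evaluates the leaf with the network, and backs the value up the path. Below is a minimal sketch of that bookkeeping, a simplified stand-in for TreeNode.py rather than its actual code:

```python
import math

class Node:
    """One state in the search tree (simplified version of TreeNode.py)."""

    def __init__(self, parent=None, prior=1.0):
        self.parent = parent
        self.children = {}      # move -> Node
        self.N = 0              # visit count
        self.Q = 0.0            # mean action value from simulations
        self.P = prior          # prior probability from the policy network

    def ucb_score(self, c_puct=5.0):
        """PUCT score: exploit Q, but explore moves the network likes that are under-visited."""
        U = c_puct * self.P * math.sqrt(self.parent.N) / (1 + self.N)
        return self.Q + U

    def select(self):
        """Descend to the (move, child) pair with the highest PUCT score."""
        return max(self.children.items(), key=lambda item: item[1].ucb_score())

    def update(self, leaf_value):
        """Back up a leaf evaluation, flipping the sign between the two players."""
        self.N += 1
        self.Q += (leaf_value - self.Q) / self.N
        if self.parent is not None:
            self.parent.update(-leaf_value)
```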

4. Reinforcement Learning

Finally, we need to understand the whole reinforcement-learning process (i.e., self-play).
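
In outline, the loop is: play games against itself with MCTS guided by the current network, store a (state, MCTS move distribution π, outcome z) triple for every position, then train the network so that the policy head matches π and the value head matches z. The sketch below is schematic, not the actual MetaZeta.py code: run_self_play_game is a hypothetical helper, and the network is assumed to be a two-output Keras model compiled with a cross-entropy loss on the policy head and mean-squared error on the value head.

```python
import random
import numpy as np

def train(net, n_iterations=1000, batch_size=512):
    """Schematic AlphaGo Zero-style training loop."""
    replay_buffer = []
    for _ in range(n_iterations):
        # 1. Self-play: MCTS guided by the current network picks every move.
        #    Each position is saved with the MCTS visit distribution pi and,
        #    once the game ends, the outcome z from that player's point of view.
        game_data = run_self_play_game(net)   # hypothetical helper -> list of (state, pi, z)
        replay_buffer.extend(game_data)

        # 2. Sample a mini-batch and fit the network: policy head towards pi,
        #    value head towards z.
        if len(replay_buffer) >= batch_size:
            batch = random.sample(replay_buffer, batch_size)
            states = np.array([s for s, _, _ in batch])
            pis    = np.array([p for _, p, _ in batch])
            zs     = np.array([z for _, _, z in batch])
            net.fit(states, [pis, zs], verbose=0)

        # 3. The improved network guides the next round of self-play,
        #    closing the reinforcement-learning loop.
    return net
```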
