Deep Reinforcement Learning for Online Computation Offloading in Wireless Powered Mobile-Edge Computing Networks
Python code to reproduce our DROO algorithm for Wireless-powered Mobile-Edge Computing [1], which takes the time-varying wireless channel gains as input and generates binary offloading decisions. It includes:
memory.py: the DNN structure for the WPMEC, including the training and testing structures, implemented based on Tensorflow 1.x.
optimization.py: solves the resource allocation problem
data: all data are stored in this subdirectory
main.py: run this file for DROO, including setting system parameters; implemented based on Tensorflow 1.x
demo_alternate_weights.py: run this file to evaluate the performance of DROO when the WDs' weights are alternated
demo_on_off.py: run this file to evaluate the performance of DROO when some WDs randomly turn on/off
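To illustrate the pipeline above (channel gains in, binary offloading decisions out), the following is a minimal sketch of DROO's order-preserving quantization step, which maps the DNN's relaxed output in [0, 1]^N to K candidate binary offloading actions. The function name `knm_quantize` and the example values are illustrative and not taken from memory.py:

```python
import numpy as np

def knm_quantize(m, k=1):
    """Sketch of DROO's order-preserving (KNM) quantization:
    map a relaxed offloading vector m in [0, 1]^N to k binary candidates."""
    candidates = []
    # 1st candidate: element-wise rounding at threshold 0.5
    candidates.append((m > 0.5).astype(int))
    if k > 1:
        # remaining candidates: threshold at the entries closest to 0.5,
        # preserving the ordering of the relaxed outputs
        idx = np.abs(m - 0.5).argsort()[: k - 1]
        for i in idx:
            if m[i] > 0.5:
                candidates.append((m - m[i] > 0).astype(int))
            else:
                candidates.append((m - m[i] >= 0).astype(int))
    return candidates

# Example relaxed DNN output for N = 4 wireless devices (illustrative values)
m = np.array([0.9, 0.1, 0.6, 0.4])
for a in knm_quantize(m, k=3):
    print(a)
```

In DROO, each candidate action is then evaluated by the resource-allocation solver (optimization.py), and the best one is both executed and stored in the replay memory to train the DNN.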
@ARTICLE{huang2020DROO,
author={Huang, Liang and Bi, Suzhi and Zhang, Ying-Jun Angela},
journal={IEEE Transactions on Mobile Computing},
title={Deep Reinforcement Learning for Online Computation Offloading in Wireless Powered Mobile-Edge Computing Networks},
year={2020},
month={November},
volume={19},
number={11},
pages={2581-2593},
doi={10.1109/TMC.2019.2928811}
}
Liang HUANG, lianghuang AT zjut.edu.cn
Suzhi BI, bsz AT szu.edu.cn
Ying Jun (Angela) Zhang, yjzhang AT ie.cuhk.edu.hk
Tensorflow
numpy
scipy
For the DROO algorithm, run main.py. If you code with Tensorflow 2 or PyTorch, run mainTF2.py or mainPyTorch.py, respectively. The original DROO algorithm is implemented based on Tensorflow 1.x. If you are new to deep learning, please start with the Tensorflow 2 or PyTorch version, whose code is much cleaner and easier to follow.
For more DROO demos, such as demo_alternate_weights.py and demo_on_off.py: if you are using Tensorflow 2 or PyTorch, replace

from memory import MemoryDNN

with

from memoryTF2 import MemoryDNN

or

from memoryPyTorch import MemoryDNN

respectively.