Generating Videos with Scene Dynamics

Introduction

This repository contains an implementation of "Generating Videos with Scene Dynamics" in TensorFlow. The paper is available at http://carlvondrick.com/tinyvideo/paper.pdf. The model learns to generate short videos by upsampling from a latent noise vector, trained adversarially against a discriminator.
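
Concretely, adversarial training pits a generator, which maps a noise vector to a video, against a discriminator that scores videos as real or generated. The snippet below is a minimal sketch of the standard DCGAN-style sigmoid cross-entropy losses, written in modern tf.keras notation for readability; it is illustrative only and not the exact loss code of this repository, which targets TensorFlow 1.x.

    import tensorflow as tf

    # Standard DCGAN-style adversarial losses (illustrative sketch only; the
    # repository's TF 1.x training code may differ in details).
    bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

    def discriminator_loss(real_logits, fake_logits):
        # The discriminator should score real videos as 1 and generated videos as 0.
        real_loss = bce(tf.ones_like(real_logits), real_logits)
        fake_loss = bce(tf.zeros_like(fake_logits), fake_logits)
        return real_loss + fake_loss

    def generator_loss(fake_logits):
        # The generator is rewarded when the discriminator scores its videos as real.
        return bce(tf.ones_like(fake_logits), fake_logits)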

Requirements

To run this code and reproduce the results, you need the packages listed below (an example install command follows the list). The code targets Python 2.7.

Packages:

  • TensorFlow
  • NumPy
  • OpenCV (cv2)
  • scikit-video
  • scikit-image
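
Assuming pip is used, the dependencies can typically be installed with something like the following; the PyPI package names here are the usual ones for these imports and are not taken from this repository.

    # PyPI package names are assumed here; a 1.x TensorFlow release is needed
    # for the Python 2.7 code.
    pip install "tensorflow<2" numpy opencv-python scikit-video scikit-image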

VideoGAN - Architecture and How It Works

The architecture used in the paper is shown below; a code sketch of the generator follows the figure.
(Figure: VideoGAN architecture)
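
The paper's generator has two streams: a foreground stream of strided 3D transposed convolutions that produces a moving foreground and a mask, and a background stream of 2D transposed convolutions that produces a static background, combined as mask * foreground + (1 - mask) * background. The sketch below illustrates this two-stream idea in tf.keras notation; the filter counts and the 32-frame, 64x64 output follow the paper's description, but this is not the repository's exact TF 1.x code.

    import tensorflow as tf
    from tensorflow.keras import layers

    def build_two_stream_generator(z_dim=100):
        # Illustrative sketch of the paper's two-stream generator, not the
        # repository's exact code: a 3D-deconvolution foreground stream with a
        # mask plus a 2D-deconvolution static background stream, combined as
        # mask * foreground + (1 - mask) * background.
        z = layers.Input(shape=(z_dim,))

        # Foreground stream: project z to a 2x4x4 volume and upsample with
        # strided 3D transposed convolutions to a 32-frame, 64x64 video.
        f = layers.Dense(2 * 4 * 4 * 512)(z)
        f = layers.Reshape((2, 4, 4, 512))(f)
        for filters in (256, 128, 64):
            f = layers.Conv3DTranspose(filters, 4, strides=2, padding="same")(f)
            f = layers.BatchNormalization()(f)
            f = layers.ReLU()(f)
        foreground = layers.Conv3DTranspose(3, 4, strides=2, padding="same",
                                            activation="tanh")(f)
        mask = layers.Conv3DTranspose(1, 4, strides=2, padding="same",
                                      activation="sigmoid")(f)

        # Background stream: 2D transposed convolutions produce one static frame.
        b = layers.Dense(4 * 4 * 512)(z)
        b = layers.Reshape((4, 4, 512))(b)
        for filters in (256, 128, 64):
            b = layers.Conv2DTranspose(filters, 4, strides=2, padding="same")(b)
            b = layers.BatchNormalization()(b)
            b = layers.ReLU()(b)
        background = layers.Conv2DTranspose(3, 4, strides=2, padding="same",
                                            activation="tanh")(b)
        # Broadcast the static background across the 32 time steps.
        background = layers.Reshape((1, 64, 64, 3))(background)
        background = layers.Lambda(lambda x: tf.tile(x, [1, 32, 1, 1, 1]))(background)

        # Combine the streams with the mask to obtain the final video.
        video = layers.Lambda(lambda t: t[0] * t[1] + (1.0 - t[0]) * t[2])(
            [mask, foreground, background])
        return tf.keras.Model(z, video)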

Usage

Place the training videos inside a folder called "trainvideos".
Run main.py, supplying a value for each required flag; an example invocation is sketched below.
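
The flag names below are hypothetical placeholders; consult the flag definitions in main.py for the actual names, defaults, and required values.

    # Hypothetical flag names -- check the flag definitions in main.py for the
    # actual names, defaults, and required values.
    python main.py --data_dir trainvideos --batch_size 16 --epochs 100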

Results

Below are some results from the model trained on the MPII Cooking Activities dataset.

Real videos


Generated videos


Acknowledgements
