Custom renderer and physics engine written from scratch in C++/Direct3D 12.
This project implements a custom rendering engine built from the ground up in C++ using Direct3D 12. It supports newer features such as raytracing and mesh shaders.
It also features a physics engine, likewise written completely from scratch.
It has an integrated (albeit pretty simple) path tracer (using hardware-accelerated raytracing), which in the future will be integrated into the real-time pipeline in some form to compute global illumination effects.
Images to the right are links to YouTube videos showcasing the various physics features.
Since this project uses Direct3D 12 as the only rendering backend, the only supported platforms are Windows 10 or higher. The project is only tested with Visual Studio 2019 and 2022, and only on NVIDIA GPUs.
For mesh shaders you will need Windows 10 SDK version 10.0.19041.0 or higher, which can be installed through the Visual Studio Installer. If you only have an older SDK installed, the build system automatically disables mesh shaders. At runtime, mesh shaders additionally require the Windows 10 May 2020 Update (20H1) or newer. If these requirements are not met, you can still build and run the program, just without mesh shader support.
If you want to use raytracing or mesh shaders, you need a compatible NVIDIA GPU. For raytracing these are the GPUs with the Pascal architecture or newer. For mesh shaders you will need a Turing GPU or newer.
The project files are currently generated with the AVX2 instruction set. If your processor does not support this, set another instruction set (either in Visual Studio or in premake5.lua).
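If you go the premake5.lua route, the setting in question is Premake's `vectorextensions` value; it might look roughly like this (the exact location within the script may differ):

```lua
-- premake5.lua: lower this if your CPU does not support AVX2,
-- e.g. to "SSE4.1", or remove the line to use the compiler default.
vectorextensions "AVX2"
```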
All other dependencies (external libraries) either come directly with the source code or in the form of submodules.
I have tried to keep the build process as simple as possible. Therefore you will not need any build tools installed on your machine. The project uses Premake, but all you need comes with the source. See also the video linked to the right for detailed instructions.
The assets seen in the screenshots above are not included with the source code.
This project implements a very simplified version of learned ragdoll locomotion. The ragdoll is constructed from separate rigid bodies, connected by hinge constraints at the elbows and knees, and cone twist constraints at the shoulders, hips, neck, etc. Currently the ragdoll only learns to stand upright, by controlling the forces applied at all of these constraints. It can withstand minor disturbances.
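For illustration only (all type names here are hypothetical, not the engine's actual API), the ragdoll layout described above amounts to something like:

```cpp
// Purely illustrative sketch of the ragdoll structure.
// RigidBody, HingeConstraint and ConeTwistConstraint are made-up names.
struct RigidBody { float mass; };

struct HingeConstraint {        // 1 rotational DOF: elbows, knees
    RigidBody* a;
    RigidBody* b;
    float minAngle, maxAngle;   // joint limits (radians)
};

struct ConeTwistConstraint {    // 3 rotational DOF: shoulders, hips, neck
    RigidBody* a;
    RigidBody* b;
    float swingLimit, twistLimit;
};
```

Each simulation step, the learned policy outputs a corrective force per constraint, which is what lets the ragdoll balance.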
The neural network has a very simple structure: just two fully connected layers with a tanh activation. It is trained using Proximal Policy Optimization (PPO), as implemented in stable-baselines3.
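A network this small is cheap to evaluate by hand: each layer is one matrix-vector product plus a bias, followed by tanh. A minimal sketch (sizes and weight values are placeholders, not the trained policy):

```cpp
#include <cmath>
#include <vector>

// One fully connected layer with tanh activation: y = tanh(W * x + b).
// W is stored row-major: W[i][j] is the weight from input j to output i.
std::vector<float> denseTanh(const std::vector<float>& x,
                             const std::vector<std::vector<float>>& W,
                             const std::vector<float>& b)
{
    std::vector<float> y = b;                  // start from the bias
    for (std::size_t i = 0; i < W.size(); ++i) {
        for (std::size_t j = 0; j < x.size(); ++j)
            y[i] += W[i][j] * x[j];
        y[i] = std::tanh(y[i]);                // activation
    }
    return y;
}
```

The whole forward pass is then just this function applied once per layer with the corresponding weight matrix and bias vector.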
The training is implemented in PyTorch, so you'll need to install Python 3.x and some packages.
I am using Miniconda, but the steps below should work fine with plain Python (replace the `conda` calls with `pip`).
To set up the environment, execute the following commands in the Anaconda PowerShell (depending on your installation, you may need to run it as administrator):
```shell
conda create --name learning
conda activate learning
conda install pytorch cpuonly -c pytorch
pip install stable-baselines3
```

(I'm training on the CPU, but feel free to experiment with training on CUDA.)
To start the training:

```shell
python ./learning/learn_locomotion.py
```

To continue from a pretrained model, set `start_from_pretrained` inside learning/learn_locomotion.py to `True`.

I didn't feel like linking against the huge `libtorch` C++ library for inference of such a simple network, so I wrote the inference myself. Thus, after learning, execute the following command to export the layer weights and biases from Python to a text file:

```shell
python ./learning/convert_model_to_c++.py
```
Then rebuild the C++ code; the weights are compiled automatically into the executable.
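In other words, the exported text file ends up as plain constant arrays baked into the binary. A sketch of what that data might look like (the actual names, sizes, and values come from the export script; everything below is made up for illustration):

```cpp
// Illustrative shape of the exported weights after code generation.
constexpr int kInputs = 3;
constexpr int kHidden = 4;

constexpr float layer1_weights[kHidden][kInputs] = {
    {  0.01f, -0.20f,  0.11f },
    {  0.07f,  0.03f, -0.50f },
    { -0.10f,  0.22f,  0.09f },
    {  0.30f, -0.04f,  0.15f },
};
constexpr float layer1_biases[kHidden] = { 0.00f, 0.10f, -0.10f, 0.05f };
```

Because the arrays are `constexpr`, the data lives in the executable's read-only section and needs no file I/O or parsing at runtime.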