Cross-platform command-line AV1 / VP9 / HEVC / H264 encoding framework with per-scene quality encoding
Full Changelog: https://github.com/master-of-zen/Av1an/compare/0.4.1...0.4.2
`--photon-noise 0` is set by @shssoichiro in https://github.com/master-of-zen/Av1an/pull/599
Use `-c y4m` instead of the deprecated `-y` in vspipe by @shssoichiro in https://github.com/master-of-zen/Av1an/pull/657
Full Changelog: https://github.com/master-of-zen/Av1an/compare/0.3.1...0.4.0-release
Full Changelog: https://github.com/master-of-zen/Av1an/compare/0.3.0...0.3.1
The provided binary in this release is dynamically linked to both the VapourSynth and FFmpeg libraries. Make sure the latest version of VapourSynth is installed and that the FFmpeg DLLs are available. The FFmpeg DLLs can be downloaded from here: https://www.gyan.dev/ffmpeg/builds/packages/ffmpeg-4.4.1-full_build-shared.7z. Extract the files and place the DLLs (which are in the bin directory once extracted) in the same directory as the downloaded av1an.exe.
`--verbose`
Full Changelog: https://github.com/master-of-zen/Av1an/compare/0.2.0...0.3.0
The provided binary in this release is dynamically linked to both the VapourSynth and FFmpeg libraries. Make sure the latest version of VapourSynth is installed and that the FFmpeg DLLs are available. The FFmpeg DLLs can be downloaded from here: https://www.gyan.dev/ffmpeg/builds/ffmpeg-release-full-shared.7z. Extract the files and place the DLLs (which are in the bin directory once extracted) in the same directory as the downloaded av1an.exe.
The first and most important change is that Av1an was completely rewritten in Rust. This improved the stability, performance, and maintainability of the project, and allows us to leverage tools and features that would simply not be possible with Python. Imagine not having syntax/typing errors at runtime :exploding_head:
Now, Av1an uses parts of the Rav1e encoder for scene change detection.
Lots of QOL improvements all around: faster encoding, static versions of av1an, faster target-quality, better and faster scene-detection, lower RAM usage, easier build process, etc.
There is just too much to cover :D
The development, growth, and success of Av1an wouldn't be possible without the contributors who care about and develop Av1an :kissing_heart: :heart: They are listed in descending order, but not by significance:
Av1an now requires VapourSynth (Release Page).
VVC encoding requires the compiled encoder (`EncoderAppStatic`), the bitstream concatenator (`parcatStatic`), and a config file from the VVC repository.
`EncoderAppStatic` and `parcatStatic` must be renamed to `vvc_encoder` and `vvc_concat` respectively, and placed where they can be reached: either the same directory, or somewhere in PATH.
Place the chosen config file for the encoder in the encoding folder.
The encoder is set to VVC with `-enc vvc`.
The config file is passed to Av1an with `--vvc_conf CONFIG_FILE`.
Required video encoding parameters:
`-wdt X` - video width
`-hgt X` - video height
`-fr X` - framerate
`-q X` - quantizer
Example: `-v " -wdt 640 -hgt 360 -fr 23.98 -q 30 "`
After the encode is done, an output file with the extension .h266 will be created.
It can be decoded to .yuv with the compiled VVC decoder:
DecoderAppStatic -d 8 -b encoded.h266 -o output.yuv
Keep in mind that encoding time is excessive.
`min_cq` and `max_cq` were changed to `min_q` and `max_q`.
Default `min_q` and `max_q` are set based on the encoder:
svt_av1: 1
rav1e: 1
aom: 2
vpx: 2
x265: 1
vvc: 1
Also, default encoding settings changed accordingly.
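For illustration, the per-encoder defaults above can be written as a simple lookup table. This is just a sketch mirroring the list in these notes, not Av1an's actual source:

```python
# Hypothetical lookup table mirroring the per-encoder defaults listed
# above; illustrative only, not Av1an's actual code.
ENCODER_DEFAULT_Q = {
    "svt_av1": 1,
    "rav1e": 1,
    "aom": 2,
    "vpx": 2,
    "x265": 1,
    "vvc": 1,
}

def default_q(encoder: str) -> int:
    """Return the default q listed for a known encoder name."""
    return ENCODER_DEFAULT_Q[encoder]
```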
There have been ~300 commits since the last release. Lots of changes, so let's keep it short.
`-enc x265` selects the encoder, and `--vmaf_target NUM` works as usual.
5 probes are now enough to cover even extreme ranges.
Usage: `--split_method aom_keyframes`
aom_keyframes uses the first pass of aomenc to determine where the encoder will place keyframes, and uses this information for splitting, resulting in zero loss of encoding efficiency from segmenting with this encoder.
Encoder errors will now be printed to the terminal.
The "Target VMAF" feature has a really simple goal: instead of guessing which CQ value of your encoder will give you the quality you want, you set the VMAF score to achieve and let the algorithm find the closest CQ value that results in that score, for each split segment. Compared to a usual single-value CQ encode, this achieves three things at once.
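A minimal sketch of that idea, assuming a hypothetical `measure_vmaf` callback that stands in for "encode a probe of the segment at this CQ and score it". This simplified version just keeps the closest probe; the real search can refine further between probes:

```python
# Sketch of per-segment target-quality search: probe several CQ values,
# score each probe, and keep the CQ whose score lands closest to the target.
# measure_vmaf is a hypothetical stand-in, not Av1an's actual API.

def pick_cq(measure_vmaf, target, min_cq=25, max_cq=50, steps=4):
    """Probe `steps` evenly spaced CQ values in [min_cq, max_cq]
    and return the one whose VMAF score is closest to `target`."""
    span = max_cq - min_cq
    probes = [round(min_cq + span * i / (steps - 1)) for i in range(steps)]
    scores = {cq: measure_vmaf(cq) for cq in probes}
    return min(scores, key=lambda cq: abs(scores[cq] - target))

# Toy quality model for demonstration only: VMAF falls as CQ rises.
toy_vmaf = lambda cq: 100 - 0.4 * cq
print(pick_cq(toy_vmaf, target=88))  # -> 33
```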
From my testing, the resulting size can be 50-80% of a usual encode, with great visual quality.
The plot contains the VMAF score for each individual frame.
Plotting after the encode is performed if the `--vmaf` flag is set or `--vmaf_target` is used.
The plot legend displays the mean and the 1st, 25th, and 75th percentiles, which in combination with the plot should be insightful enough for judging the quality of the encode, rather than a single VMAF value for the whole video.
This is a plot of an encode that used Target VMAF 96 (don't mind the nan in the mean; it was fixed at the time of this post, but I don't want to re-encode :) )
The 1st percentile you can see on this plot shows the likely VMAF score of complex scenes, which usually involve a lot of movement, changes in perspective, zooming in and out, and a change in video context on every frame.
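As an illustration of those summary statistics, here is a minimal stdlib-only sketch; the per-frame scores below are invented numbers, not real VMAF output:

```python
# Sketch: computing the summary statistics shown in the plot legend
# (mean and 1st/25th/75th percentiles) from per-frame VMAF scores.
from statistics import mean

def percentile(scores, p):
    """Nearest-rank percentile of a list of per-frame scores."""
    ordered = sorted(scores)
    k = round(p / 100 * (len(ordered) - 1))
    return ordered[max(0, min(len(ordered) - 1, k))]

# Invented per-frame scores, for illustration only.
frame_scores = [97.1, 95.4, 88.2, 96.0, 91.3, 94.8, 86.5, 93.9]
print(mean(frame_scores))           # overall mean
print(percentile(frame_scores, 1))  # likely score of the hardest scenes
print(percentile(frame_scores, 25))
print(percentile(frame_scores, 75))
```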
To try it, simply run your default constant-quality encode with `--vmaf_target N`, where N is the VMAF score you want to achieve; I suggest something in the 90-96 area.
`--vmaf_target` sets the VMAF score that Av1an will aim for.
`--min_cq` sets the lower boundary for the VMAF probes and limits the minimum CQ value at which the video can be encoded. Default is 25.
`--max_cq` sets the upper boundary for the VMAF probes and limits the maximum CQ value at which the video can be encoded. Default is 50.
`--vmaf_steps` sets the number of probes that are used to find the best CQ value for the target VMAF. Default is 4. If `min_cq` and `max_cq` are changed so that the distance between them grows, make sure to set enough steps that there is a probe for every ~5 CQ of distance.
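That rule of thumb can be sketched as a small helper (a hypothetical function, not an Av1an flag): choose enough steps that adjacent probes sit no more than ~5 CQ apart.

```python
# Sketch of the rule of thumb above: smallest probe count that keeps
# adjacent probes within `per_probe` CQ of each other.
import math

def suggested_steps(min_cq, max_cq, per_probe=5):
    """Number of evenly spaced probes for a probe every ~per_probe CQ."""
    return max(2, math.ceil((max_cq - min_cq) / per_probe) + 1)

print(suggested_steps(25, 50))  # default 25-50 range -> 6
```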
For more information about the previous Target VMAF implementation, refer to the 1.8 release. This is an evolution of that method.
Also worth mentioning: Target VMAF currently works only with the reference AV1 encoder.
VMAF plotting is available as a separate package, and only needs VMAF's or ffmpeg's libvmaf result XML file to work. GITHUB_REPO
`-xs`, `--extra_splits` will add cuts every N frames on splits that are longer than N frames, spreading the cuts evenly and intelligently.
This option helps split big scenes, or files that are one long scene (like camera recordings), into splits that parallelize better.
With `--extra_splits 400`:
For a split that goes from frame 14000 to 14900, the length is 900 frames; 900/400 = 2.25, which rounds to 2 extra cuts.
Av1an will then find the two keyframes closest to frames 14300 and 14600 and place the cuts there.
For another split from frame 0 to 500, Av1an will find the keyframe closest to frame 250 and place the cut there.
Splits that have fewer than 400 frames are not affected.
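The worked example above can be sketched as follows. This computes only the ideal evenly spaced positions; Av1an additionally snaps each cut to the nearest keyframe, which this sketch omits:

```python
# Sketch of the --extra_splits logic: for each split longer than n frames,
# insert round(length / n) evenly spaced extra cut points.
# (Snapping each cut to the nearest keyframe is not modeled here.)

def extra_split_points(start, end, n):
    """Ideal positions for extra cuts in a split covering [start, end)."""
    length = end - start
    if length <= n:
        return []                      # short splits are not affected
    cuts = round(length / n)           # e.g. 900 / 400 = 2.25 -> 2 cuts
    step = length / (cuts + 1)         # spread cuts evenly across the split
    return [round(start + step * (i + 1)) for i in range(cuts)]

print(extra_split_points(14000, 14900, 400))  # -> [14300, 14600]
print(extra_split_points(0, 500, 400))        # -> [250]
print(extra_split_points(0, 300, 400))        # -> []
```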