OTT/ABR live streaming encoder (H264/HEVC) and packager (or independent packager) for HLS and DASH
This application is intended to serve as a reliable and scalable OTT streaming repackager (with optional transcoding) to deliver content as part of an overall media streaming platform. There are two key variations of OTT streaming technologies that this software accommodates:
HLS (HTTP Live Streaming) - Transport Stream HLS and Fragmented MP4 HLS (CMAF style)
DASH (Dynamic Adaptive Streaming over HTTP) - Fragmented MP4
With this application, you can ingest live MPEG-2 transport streams carried over UDP (multicast or unicast) for transcoding and/or repackaging into HTTP Live Streaming (HLS) (both TS and fMP4) and DASH output container formats. The application can transcode, or simply repackage. If you are repackaging, the source streams must be MPEG-2 transport streams containing H264/HEVC video and AAC audio; if you are transcoding, you can ingest MPEG-2 transport streams containing other formats as well. In transcode mode, SCTE35 messages (CUE-OUT/CUE-IN) are also processed into the HLS manifests.
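As a rough illustration, SCTE35 ad markers often surface in an HLS media playlist as CUE-OUT/CUE-IN tags alongside the segment entries. The sketch below uses the common EXT-X-CUE-OUT/EXT-X-CUE-IN convention; the exact tag style this application emits may differ, so inspect your generated manifests to confirm.

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:100
#EXTINF:2.002,
video_stream0_100.ts
#EXT-X-CUE-OUT:60.0
#EXTINF:2.002,
video_stream0_101.ts
#EXT-X-CUE-IN
#EXTINF:2.002,
video_stream0_131.ts
```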
There are two ways to use this application. The first and simplest method is to use the command-line version. You can quickly clone the repository, compile, and easily start streaming. The Quickstart for the web application is further down in the README and is a bit more involved to set up and run, but it provides a scriptable API as well as a clean interface with thumbnails and other status information in transcoding mode. The web application is still in its early stages and I will continually be adding features for managing these types of streaming services.
I would also appreciate any funding support, even a one-time donation. I only work on this project in my spare time. If there are specific features you would like to see, a funding donation goes a long way toward making them happen. I can also offer support services for deployment to address any devops issues, troubleshoot hardware (or software) problems, or just offer general advice.
If something doesn't work here for you, then please post a bug in GitHub. I know this process can be a pain in the arse to get going especially given everyone has a different environment, so please be patient in following the instructions. If you do have an issue, I am more than willing to help out. I will soon be deprecating the CPU based transcoding since it requires a lot of my time to maintain both code pathways.
Please follow the directions below *very* closely:
cannonbeach@insanitywave:$ sudo apt install git
cannonbeach@insanitywave:$ sudo apt install build-essential
cannonbeach@insanitywave:$ sudo apt install libz-dev
cannonbeach@insanitywave:$ git clone https://github.com/cannonbeach/ott-packager.git
cannonbeach@insanitywave:$ cd ott-packager
*IMPORTANT* *IMPORTANT* *IMPORTANT* *IMPORTANT* *IMPORTANT*
(VERY IMPORTANT: If you plan to run on an NVIDIA GPU system, make sure the cudainclude and cudalib directories are set correctly in setuptranscode.sh *before* running it; otherwise the setup will fail. Please also make sure that MakefileTranscode has the correct paths.)
The Dockerfile is currently set up to use the NVIDIA base image for CUDA 12.1.1. You can get the combined CUDA+Driver installer here:
https://developer.nvidia.com/cuda-12-1-1-download-archive
Download the file cuda_12.1.1_530.30.02_linux.run; it installs both CUDA 12.1.1 and the NVIDIA driver.
You can get the NVIDIA patch here, which enables more concurrent encoding sessions on consumer hardware. It works extremely well and you should install it for maximum performance!
https://github.com/keylase/nvidia-patch
If you need a different version, update Dockerfile_Transcode to the correct base image and update all of the corresponding paths. Email me if you need help. I have limited access to GPU hardware and am looking to get my hands on something that supports AV1; you can donate, provide cloud credits, or make some other arrangement.
*IMPORTANT* *IMPORTANT* *IMPORTANT* *IMPORTANT* *IMPORTANT*
cannonbeach@insanitywave:$ chmod +x setuptranscode.sh
cannonbeach@insanitywave:$ ./setuptranscode.sh
*IMPORTANT* *IMPORTANT* *IMPORTANT* *IMPORTANT* *IMPORTANT*
(VERY IMPORTANT: If you are not compiling on an NVIDIA GPU system, when you get to the x265 setup (towards the end of the script execution), set ENABLE_SHARED to OFF and ENABLE_ASSEMBLY to ON, then press 'c' to configure, followed by 'g' to generate and exit.)
*IMPORTANT* *IMPORTANT* *IMPORTANT* *IMPORTANT* *IMPORTANT*
cannonbeach@insanitywave:$ chmod +x setupsystem.sh
cannonbeach@insanitywave:$ ./setupsystem.sh
cannonbeach@insanitywave:$ ./mkpkg.sh
cannonbeach@insanitywave:$ sudo dpkg -i fillet-1.1.deb
Then point a web browser at port 8080, for example: http://10.0.0.200:8080, and the web application will come up. If for some reason it does not, review the steps above to make sure you followed everything correctly.
You will notice that the Apache web server was also installed. It allows you to easily serve content directly off the same system. The content will be available from the directories that you specified in your configurations.
The software install guide here is for Ubuntu 20.04 server only; however, you can run this on older/newer versions of Ubuntu as well as in Docker containers for AWS/Google cloud based deployments. I do not maintain a CentOS installation guide.
There are now two versions of the application that get built. The transcode/package (fillet_transcode) and the independent packager (fillet_repackage).
The fillet application must be run as a user with *root* privileges, otherwise it will *not* work.
usage: fillet_repackage [options]
INPUT PACKAGING OPTIONS (AUDIO AND VIDEO CAN BE SEPARATE STREAMS)
--vsources [NUMBER OF VIDEO SOURCES - TO PACKAGE ABR SOURCES: MUST BE >= 1 && <= 10]
--asources [NUMBER OF AUDIO SOURCES - TO PACKAGE ABR SOURCES: MUST BE >= 1 && <= 10]
INPUT OPTIONS (when --type stream)
--vip [IP:PORT,IP:PORT,etc.] (THIS MUST MATCH NUMBER OF VIDEO SOURCES)
--aip [IP:PORT,IP:PORT,etc.] (THIS MUST MATCH NUMBER OF AUDIO SOURCES)
--interface [SOURCE INTERFACE - lo,eth0,eth1,eth2,eth3]
If multicast, make sure route is in place (route add -net 224.0.0.0 netmask 240.0.0.0 interface)
OUTPUT PACKAGING OPTIONS
--window [WINDOW IN SEGMENTS FOR MANIFEST]
--segment [SEGMENT LENGTH IN SECONDS]
--manifest [MANIFEST DIRECTORY "/var/www/hls/"]
--identity [RUNTIME IDENTITY - any number, but must be unique across multiple instances of fillet]
--hls [ENABLE TRADITIONAL HLS TRANSPORT STREAM OUTPUT - NO ARGUMENT REQUIRED]
--dash [ENABLE FRAGMENTED MP4 STREAM OUTPUT (INCLUDES DASH+HLS FMP4) - NO ARGUMENT REQUIRED]
--manifest-dash [NAME OF THE DASH MANIFEST FILE - default: masterdash.mpd]
--manifest-hls [NAME OF THE HLS MANIFEST FILE - default: master.m3u8]
--manifest-fmp4 [NAME OF THE fMP4/CMAF MANIFEST FILE - default: masterfmp4.m3u8]
--webvtt [ENABLE WEBVTT CAPTION OUTPUT]
--cdnusername [USERNAME FOR WEBDAV ACCOUNT]
--cdnpassword [PASSWORD FOR WEBDAV ACCOUNT]
--cdnserver [HTTP(S) URL FOR WEBDAV SERVER]
PACKAGING AND TRANSCODING OPTIONS CAN BE COMBINED
And for transcode/package, usage is as follows:
usage: fillet_transcode [options]
INPUT TRANSCODE OPTIONS (AUDIO AND VIDEO MUST BE ON SAME TRANSPORT STREAM)
--sources [NUMBER OF SOURCES - TO PACKAGE ABR SOURCES: MUST BE >= 1 && <= 10]
INPUT OPTIONS (when --type stream)
--ip [IP:PORT,IP:PORT,etc.] (THIS MUST MATCH NUMBER OF SOURCES)
--interface [SOURCE INTERFACE - lo,eth0,eth1,eth2,eth3]
If multicast, make sure route is in place (route add -net 224.0.0.0 netmask 240.0.0.0 interface)
OUTPUT PACKAGING OPTIONS
--window [WINDOW IN SEGMENTS FOR MANIFEST]
--segment [SEGMENT LENGTH IN SECONDS]
--manifest [MANIFEST DIRECTORY "/var/www/hls/"]
--identity [RUNTIME IDENTITY - any number, but must be unique across multiple instances of fillet]
--hls [ENABLE TRADITIONAL HLS TRANSPORT STREAM OUTPUT - NO ARGUMENT REQUIRED]
--dash [ENABLE FRAGMENTED MP4 STREAM OUTPUT (INCLUDES DASH+HLS FMP4) - NO ARGUMENT REQUIRED]
--manifest-dash [NAME OF THE DASH MANIFEST FILE - default: masterdash.mpd]
--manifest-hls [NAME OF THE HLS MANIFEST FILE - default: master.m3u8]
--manifest-fmp4 [NAME OF THE fMP4/CMAF MANIFEST FILE - default: masterfmp4.m3u8]
--webvtt [ENABLE WEBVTT CAPTION OUTPUT]
--cdnusername [USERNAME FOR WEBDAV ACCOUNT]
--cdnpassword [PASSWORD FOR WEBDAV ACCOUNT]
--cdnserver [HTTP(S) URL FOR WEBDAV SERVER]
OUTPUT TRANSCODE OPTIONS
--transcode [ENABLE TRANSCODER AND NOT JUST PACKAGING]
--gpu [GPU NUMBER TO USE FOR TRANSCODING - defaults to 0 if GPU encoding is enabled]
--select [PICK A STREAM FROM AN MPTS- INDEX IS BASED ON PMT INDEX - defaults to 0]
--outputs [NUMBER OF OUTPUT LADDER BITRATE PROFILES TO BE TRANSCODED]
--vcodec [VIDEO CODEC - needs to be hevc or h264]
--resolutions [OUTPUT RESOLUTIONS - formatted as: 320x240,640x360,960x540,1280x720]
--vrate [VIDEO BITRATES IN KBPS - formatted as: 800,1250,2500,500]
--acodec [AUDIO CODEC - needs to be aac, ac3 or pass]
--arate [AUDIO BITRATES IN KBPS - formatted as: 128,96]
--aspect [FORCE THE ASPECT RATIO - needs to be 16:9, 4:3, or other]
--scte35 [PASSTHROUGH SCTE35 TO MANIFEST (for HLS packaging)]
--stereo [FORCE ALL AUDIO OUTPUTS TO STEREO- will downmix if source is 5.1 or upmix if source is 1.0]
--quality [VIDEO ENCODING QUALITY LEVEL 0-3 (0-BASIC,1-STREAMING,2-BROADCAST,3-PROFESSIONAL)]
LOADING WILL AFFECT CHANNEL DENSITY - SOME PLATFORMS MAY NOT RUN HIGHER QUALITY REAL-TIME
H.264 SPECIFIC OPTIONS (valid when --vcodec is h264)
--profile [H264 ENCODING PROFILE - needs to be base,main or high]
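Per the usage text above, the ABR ladder is defined by three options that must agree: --outputs, --resolutions, and --vrate. A minimal sketch (illustrative only, not part of fillet) of validating that agreement before launching the transcoder:

```python
# Sketch: validate that an ABR ladder's option counts agree before
# launching fillet_transcode. The option names mirror the usage text
# above; the validation logic itself is illustrative.

def parse_ladder(outputs, resolutions, vrates):
    """Split comma-separated --resolutions/--vrate values and check
    they match the --outputs count, as fillet_transcode expects."""
    res_list = resolutions.split(",")
    rate_list = [int(r) for r in vrates.split(",")]
    if not (len(res_list) == len(rate_list) == outputs):
        raise ValueError("--outputs must match --resolutions and --vrate counts")
    return list(zip(res_list, rate_list))

# Matches the example commands further below:
ladder = parse_ladder(2, "320x240,960x540", "500,2500")
for res, kbps in ladder:
    print(f"{res} @ {kbps} kbps")
```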
Simple Repackaging Command Line Example Usage:
cannonbeach@insanitywave:$ sudo ./fillet_repackage --vsources 2 --vip 0.0.0.0:20000,0.0.0.0:20001 --asources 2 --aip 0.0.0.0:20002,0.0.0.0:20003 --interface eno1 --window 10 --segment 2 --hls --manifest /var/www/html/hls
This will write the manifests into the /var/www/html/hls directory (this is a common Apache directory).
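One way to reason about the --window and --segment values: the live playback window advertised by the manifest is roughly the number of retained segments times the segment duration. A quick sketch of that arithmetic:

```python
# Sketch: the live manifest window is roughly the number of segments
# kept in the manifest times the segment duration. Values mirror the
# example command above (--window 10 --segment 2).

def manifest_window_seconds(window_segments, segment_seconds):
    """Approximate playable duration advertised by a live manifest."""
    return window_segments * segment_seconds

print(manifest_window_seconds(10, 2))  # the example above keeps ~20 s live
```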
If you are receiving multicast, make sure a route is in place:
cannonbeach@insanitywave:$ sudo route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
You may also need to disable reverse path filtering in /etc/sysctl.conf so the kernel does not drop the multicast traffic:
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0
and so on for each interface. After you've made those changes, please run the following for the changes to take effect:
sudo sysctl -p
The packager can run in several optional modes, from straight repackaging to full transcoding.
This is a budget-friendly packaging/transcoding solution with the expectation of being simple to set up, use, and deploy. The solution is very flexible and even allows you to run several instances of the application with different parameters and output stream combinations (i.e., a mobile stream set and a set-top box stream set). If you do run multiple instances using the same source content, you will want to receive the streams from a multicast source instead of unicast. The simplicity of the deployment model also provides a means for fault-tolerant setups.
A key value-add of this packager is that source discontinuities are handled quite well (in standard packaging mode as well as the transcoding modes). The manifests are set up to be continuous even through periods of discontinuity, so that the player experiences as little interruption as possible. The manifest does not start out in a clean state unless you remove the local cache files used for fast restart (located in /var/tmp/hlsmux...). This applies to both HLS (handled by discontinuity tags) and DASH outputs (handled by clever timeline stitching in the manifest). Many of the other packagers available on the market did not handle discontinuities well, so I wanted to raise the bar with regard to handling signal interruptions (we don't like them, but yes, they happen, and the better you handle them the happier your customers will be). If the source signal goes away in transcoding mode, the software will backfill by repeating video frames and filling the audio with silence. This is great if you have a short signal interruption and need a way to maintain your output.
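For reference, the HLS side of this looks like the standard EXT-X-DISCONTINUITY mechanism from the HLS specification (RFC 8216): the tag tells the player that timestamps and encoding parameters may reset at the next segment. A sketch of a playlist spanning a discontinuity (segment names modeled on the log samples further below; the exact playlist this tool writes may differ):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:36
#EXT-X-DISCONTINUITY-SEQUENCE:1
#EXTINF:2.002,
video_stream0_36.ts
#EXT-X-DISCONTINUITY
#EXTINF:2.002,
video_stream0_37.ts
```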
Another differentiator (which is a bit more common practice now) is that the segments are written out to separate audio and video files instead of a single multiplexed output file containing both audio and video. This provides additional degrees of freedom when selecting different audio and video streams for playback (it does make testing a bit more difficult though).
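In HLS, separate audio and video files are tied together in the master playlist through an audio rendition group (EXT-X-MEDIA with an AUDIO group-id, per RFC 8216). A sketch of what that wiring looks like, using playlist names modeled on the log samples further below (the actual master playlist this tool generates may differ):

```
#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",NAME="English",DEFAULT=YES,URI="audio0_substream0.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=320x240,AUDIO="aac"
video0.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=960x540,AUDIO="aac"
video1.m3u8
```

The player picks a video variant for bandwidth adaptation and independently selects an audio rendition from the group, which is the extra degree of freedom described above.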
cannonbeach@insanitywave:$ ./fillet_transcode --sources 1 --ip 0.0.0.0:5000 --interface eth0 --window 20 --segment 2 --identity 1000 --hls --dash --transcode --outputs 2 --vcodec h264 --resolutions 320x240,960x540 --manifest /var/www/html/hls --vrate 500,2500 --acodec aac --arate 128 --aspect 16:9 --scte35 --quality 0 --profile base --stereo
cannonbeach@insanitywave:$ ./fillet_transcode --sources 1 --ip 0.0.0.0:5000 --interface eth0 --window 20 --segment 2 --identity 1000 --hls --dash --transcode --outputs 2 --vcodec hevc --resolutions 320x240,960x540 --manifest /var/www/html/hls --vrate 500,1250 --acodec aac --arate 128 --aspect 16:9 --quality 0 --stereo
Get Detailed Service Status:
http://127.0.0.1:8080/api/v1/get_service_status/##
Get Service Count:
http://127.0.0.1:8080/api/v1/get_service_count
Get Service List (A list of the current services and high level status but not a lot of details):
http://127.0.0.1:8080/api/v1/list_services
Get System Information (CPU Load, Memory, Temperature, etc.):
http://127.0.0.1:8080/api/v1/system_information
(see Wiki for a use case for the transcoding API)
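A minimal client sketch for the endpoints listed above. The endpoint paths come from this README; the JSON response shape is an assumption and may differ from what the web application actually returns, so treat get_json as a starting point only.

```python
# Sketch of a small client for the fillet status API. Endpoint paths
# are from the README; response fields are NOT assumed here.
import json
import urllib.request

BASE = "http://127.0.0.1:8080/api/v1"

def api_url(endpoint, service_id=None):
    """Build a full API URL, appending the service id when required
    (e.g. get_service_status/##)."""
    url = f"{BASE}/{endpoint}"
    if service_id is not None:
        url += f"/{service_id}"
    return url

def get_json(endpoint, service_id=None):
    """Fetch an endpoint and decode the JSON body (requires a running instance)."""
    with urllib.request.urlopen(api_url(endpoint, service_id), timeout=5) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Examples (against a running instance):
#   status = get_json("get_service_status", 1)
#   count = get_json("get_service_count")
print(api_url("get_service_status", 1))
```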
The application will also POST event messages to a third-party client (or log) for the following events. The Winston logging system is now used within the NodeJS framework, so it is quite easy to extend this to meet your own needs. The default log is /var/log/eventlog.log. It is recommended that you add it to the system logrotate.
- Start Service (Container Start)
- Stop Service (Container Stop)
- No Source Signal
- Docker Container Restart
- SCTE35 begin/end
- Segment Published Upload
- Segment Published Failed Upload
- High CPU Usage
- Low Drive Space
- Service Added
- Service Removed
- Silence Inserted
- Frame Dropped
- Frame Repeated
- High Source Errors Over Period of Time (threshold TBD/ms)
And instead of building a full dashboard monitoring system, I've been looking at other open source services to provide a nice interface for tracking the health of the systems and generated streams.
There is nothing more frustrating than cloning an open source project off of GitHub and not being able to get it to compile or work! I do my best to make sure everything works within the context of the resources available to me. I am not doing nightly builds and do not have a complicated autotest framework. I work on this in my spare time, so it's possible something may slip by. If something doesn't work, please reach out to me or post a bug in the "Issues" section. I know the instructions and setup scripts are a bit extensive and detailed, but if you follow them line by line they do work.
I am currently using Winston to log messages back through the NodeJS interface. Here is a sample of the logging information provided, which is currently logged to /var/log/eventlog.log on the "Host" system.
Some troubleshooting tips:
To run the web application manually and watch its console output:
cd /var/app
sudo node server.js
sudo tcpdump -n udp -i eth0
This will quickly tell you whether you are receiving content.
or
ffprobe udp://@:5000
That will quickly identify whether something is arriving on that port.
Check inside /var/tmp/status for .lock files. If the Docker container got out of sync with the webapp, you may need to manually delete the .lock file for the specific configuration you are having problems with.
The configuration files are also stored in /var/tmp/configs.
You can change config settings manually by editing the .json files in /var/tmp/configs
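If you prefer to script such edits rather than hand-editing, a sketch of a read-modify-write against one of those .json files is below. The field name used ("window") is hypothetical for illustration; inspect your actual config files to learn the real keys before changing anything.

```python
# Sketch: edit one key in a JSON config file such as those stored in
# /var/tmp/configs. The key name "window" is a hypothetical example;
# check your real .json files for the actual field names.
import json

def update_config(path, key, value):
    """Read a JSON config, change one key, and write it back."""
    with open(path, "r") as f:
        config = json.load(f)
    config[key] = value
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    return config

# Example (path and key are illustrative):
#   update_config("/var/tmp/configs/1.json", "window", 10)
```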
I suggest you be resourceful and try to debug things. These types of systems are not always easy to setup.
While running the webapp, you can do a "tail -f /var/log/eventlog.log". You should also add the eventlog.log to the logrotate.conf on your Ubuntu system to prevent your drive from filling up.
{"accesstime":"2024-03-27T14:12:36Z","host":"tapeworm","id":1708558258,"level":"info","message":"manifest written","segment":"/var/www/html/hls/video0fmp4.m3u8","status":"success"}
{"accesstime":"2024-03-27T14:12:36Z","host":"tapeworm","id":1708558258,"level":"info","message":"segment written","segment":"/var/www/html/hls/video_stream1_36.ts","status":"success"}
{"accesstime":"2024-03-27T14:12:36Z","host":"tapeworm","id":1708558258,"level":"info","message":"segment written","segment":"/var/www/html/hls/video1/segment163198501845.mp4","status":"success"}
{"accesstime":"2024-03-27T14:12:36Z","host":"tapeworm","id":1708558258,"level":"info","message":"manifest written","segment":"/var/www/html/hls/video1.m3u8","status":"success"}
{"accesstime":"2024-03-27T14:12:36Z","host":"tapeworm","id":1708558258,"level":"info","message":"manifest written","segment":"/var/www/html/hls/video1fmp4.m3u8","status":"success"}
{"accesstime":"2024-03-27T14:12:36Z","host":"tapeworm","id":1708558258,"level":"info","message":"segment written","segment":"/var/www/html/hls/audio_stream0_substream_0_36.ts","status":"success"}
{"accesstime":"2024-03-27T14:12:36Z","host":"tapeworm","id":1708558258,"level":"info","message":"segment written","segment":"/var/www/html/hls/audio0_substream0/segment163198503286.mp4","status":"success"}
{"accesstime":"2024-03-27T14:12:36Z","host":"tapeworm","id":1708558258,"level":"info","message":"manifest written","segment":"/var/www/html/hls/audio0_substream0.m3u8","status":"success"}
{"accesstime":"2024-03-27T14:12:36Z","host":"tapeworm","id":1708558258,"level":"info","message":"manifest written","segment":"/var/www/html/hls/audio0_substream0_fmp4.m3u8","status":"success"}
{"accesstime":"2024-03-27T14:12:36Z","host":"tapeworm","id":1708558258,"level":"info","message":"manifest written","segment":"/var/www/html/hls/master.mpd","status":"success"}
{"accesstime":"2024-03-27T14:12:42Z","host":"tapeworm","id":1708558258,"level":"info","message":"segment written","segment":"/var/www/html/hls/video_stream0_37.ts","status":"success"}
{"accesstime":"2024-03-27T14:12:42Z","host":"tapeworm","id":1708558258,"level":"info","message":"segment written","segment":"/var/www/html/hls/video0/segment163199042385.mp4","status":"success"}
{"accesstime":"2024-03-27T14:12:42Z","host":"tapeworm","id":1708558258,"level":"info","message":"manifest written","segment":"/var/www/html/hls/video0.m3u8","status":"success"}
{"accesstime":"2024-03-27T14:12:42Z","host":"tapeworm","id":1708558258,"level":"info","message":"manifest written","segment":"/var/www/html/hls/video0fmp4.m3u8","status":"success"}
{"accesstime":"2024-03-27T14:12:43Z","host":"tapeworm","id":1708558258,"level":"info","message":"segment written","segment":"/var/www/html/hls/video_stream1_37.ts","status":"success"}
{"accesstime":"2024-03-27T14:12:43Z","host":"tapeworm","id":1708558258,"level":"info","message":"segment written","segment":"/var/www/html/hls/video1/segment163199042385.mp4","status":"success"}
{"accesstime":"2024-03-27T14:12:43Z","host":"tapeworm","id":1708558258,"level":"info","message":"manifest written","segment":"/var/www/html/hls/video1.m3u8","status":"success"}
{"accesstime":"2024-03-27T14:12:43Z","host":"tapeworm","id":1708558258,"level":"info","message":"manifest written","segment":"/var/www/html/hls/video1fmp4.m3u8","status":"success"}
{"accesstime":"2024-03-27T14:12:43Z","host":"tapeworm","id":1708558258,"level":"info","message":"segment written","segment":"/var/www/html/hls/audio_stream0_substream_0_37.ts","status":"success"}
{"accesstime":"2024-03-27T14:12:43Z","host":"tapeworm","id":1708558258,"level":"info","message":"segment written","segment":"/var/www/html/hls/audio0_substream0/segment163199042806.mp4","status":"success"}
{"accesstime":"2024-03-27T14:12:43Z","host":"tapeworm","id":1708558258,"level":"info","message":"manifest written","segment":"/var/www/html/hls/audio0_substream0.m3u8","status":"success"}
{"accesstime":"2024-03-27T14:12:43Z","host":"tapeworm","id":1708558258,"level":"info","message":"manifest written","segment":"/var/www/html/hls/audio0_substream0_fmp4.m3u8","status":"success"}
{"accesstime":"2024-03-27T14:12:43Z","host":"tapeworm","id":1708558258,"level":"info","message":"manifest written","segment":"/var/www/html/hls/master.mpd","status":"success"}
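Because each line of /var/log/eventlog.log is a single JSON object, the log is easy to filter or tally with a few lines of Python. The sample lines embedded below are modeled on the output shown above; the tally function is illustrative, not part of fillet.

```python
# Sketch: tally event types from the JSON-lines event log, e.g. to
# spot a burst of failed uploads. Sample lines modeled on the README.
import json
from collections import Counter

SAMPLE = """\
{"accesstime":"2024-03-27T14:12:42Z","host":"tapeworm","id":1708558258,"level":"info","message":"segment written","segment":"/var/www/html/hls/video_stream0_37.ts","status":"success"}
{"accesstime":"2024-03-27T14:12:42Z","host":"tapeworm","id":1708558258,"level":"info","message":"manifest written","segment":"/var/www/html/hls/video0.m3u8","status":"success"}
{"accesstime":"2024-03-27T14:12:43Z","host":"tapeworm","id":1708558258,"level":"info","message":"segment written","segment":"/var/www/html/hls/video_stream1_37.ts","status":"success"}
"""

def count_messages(lines):
    """Parse each JSON line and count occurrences of each message type."""
    return Counter(json.loads(line)["message"] for line in lines if line.strip())

counts = count_messages(SAMPLE.splitlines())
print(counts)  # Counter({'segment written': 2, 'manifest written': 1})
```

The same function works on the live log, e.g. `count_messages(open("/var/log/eventlog.log"))`.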
An example logrotate configuration for the event log (e.g., in /etc/logrotate.d/eventlog):
/var/log/eventlog.log
{
rotate 4
weekly
missingok
notifempty
compress
delaycompress
sharedscripts
copytruncate
postrotate
/usr/lib/rsyslog/rsyslog-rotate
endscript
}
(03/26/24) SCTE35
Ok, I've been scrapping my way through some remaining SCTE35 issues and have finally gotten things resolved. It is working well for HLS outputs, so if you find something that is not working correctly, you all need to let me know. There are also some new log messages that go into /var/log/eventlog.log. Have fun with it.
{"accesstime":"2024-03-26T19:23:27Z","host":"tapeworm","id":1708558258,"level":"info","message":"SCTE35 Cue Out of Network Detected (OUT), anchor_time=6158078666, splice_time=6158737825, duration=5400000","status":"success"}
{"accesstime":"2024-03-26T19:23:29Z","host":"tapeworm","id":1708558258,"level":"info","message":"Inserting IDR video frame for SCTE35 CUE OUT splice point, encoder=0","status":"success"}
{"accesstime":"2024-03-26T19:23:29Z","host":"tapeworm","id":1708558258,"level":"info","message":"Inserting IDR video frame for SCTE35 CUE OUT splice point, encoder=1","status":"success"}
{"accesstime":"2024-03-26T19:24:29Z","host":"tapeworm","id":1708558258,"level":"info","message":"SCTE35 Splice Duration Finished, anchor_time=6164138720","status":"success"}
{"accesstime":"2024-03-26T19:24:29Z","host":"tapeworm","id":1708558258,"level":"info","message":"SCTE35 Cue In Network Detected (IN), anchor_time=6164140222, splice_time=6164137219","status":"success"}
{"accesstime":"2024-03-26T19:24:29Z","host":"tapeworm","id":1708558258,"level":"info","message":"Inserting IDR video frame for SCTE35 CUE IN splice point, encoder=0","status":"success"}
{"accesstime":"2024-03-26T19:24:29Z","host":"tapeworm","id":1708558258,"level":"info","message":"Inserting IDR video frame for SCTE35 CUE IN splice point, encoder=1","status":"success"}
(02/22/24) Small update
Updated to a newer CUDA version in the Dockerfile since Docker Hub deprecated the image I was previously using.
(01/02/24) Happy New Year!
If anyone needs SRT support on the input side of the ott-packager, please use my other project opensrthub. https://github.com/cannonbeach/opensrthub.git And as usual, if anyone needs something, send me an email.
(09/26/23) Ok, ok, ok....the weather is getting colder and I am not ready for winter
I figured it was time to come back to this project and do some things. I added webapp support for packaging, so you can now add a packaging service or a transcoding service using the webapp. I also updated NodeJS from 12 to 18, and made the transcode a separate compile from the repackager. You must follow the new set of instructions to get everything up and running and you no longer have a choice to build one or the other (at least with the scripts I am providing). Reach out if there is an issue or a question. I'd love to hear from you.
(03/15/22) It's been awhile....
I've been off doing other projects and have been meaning to come back to this project and give it some much needed attention! I pushed up some small timestamp fixes along with initial support for nvidia based gpu encoding. I did not fully update the quickstart instructions above but will do that in the coming days. I have lots of build combinations to test and need to setup a clean system to make sure things are working as intended. Send me an email if you get stuck in the meantime.
(11/10/20) Small update
(07/31/20) It's been awhile....
(10/25/19) It's almost Halloween! Trick r' Treat!
(07/25/19) Short update
(04/29/19) Web application development
(04/15/19) Another short update
(03/14/19) Short update on things
(03/04/19) Project is still in active development. I am still pushing for a v1.0 in the next couple of months. I pushed up a small update today to clean up a few minor issues:
(02/20/19) As I mentioned in earlier posts, the application is still in active development, but I am getting closer to a v1.0 release. This most recent update has included some significant transcoding feature improvements.
And finally, I am also thinking of putting together a "Pro" version to help me fund the development of this project. It'll be based on a reasonable yearly fee and provide access to an additional repository that contains a full NodeJS web interface, a more complete Docker integration, benchmarks, cloud deployment examples, deployment/installation scripts, priority support, fully documented API (along with scripts), SNMP traps, and active/passive failover support.
But for those of you that don't wish to take advantage of things like support, the source code for the core application will remain available in the existing repository.
I also plan to start adapting the current solution to a file-based version after v1.0 has been finished and released.
(01/12/19) This application is still in active development and I am hoping to have an official v1.0 release in the next couple of months. I still need to tie up some loose ends on the packaging as well as complete the basic H.264 and HEVC transcoding modes. The remaining items will be tagged in the "Issues" section.
I do offer fee based consulting so please send me an email if you are interested in retaining me for any support issues or feature development. I have several support models available and can provide more details upon request. You can reach me at: [email protected]
See the WIKI page for more information:
https://github.com/cannonbeach/ott-packager/wiki