A server to collect & archive websites that also supports video downloads
You can now filter by multiple domains. For example, if hosted on http://yourserver:100, you can visit http://yourserver:100/site/example.com+example2.com to see all sites from example.com and example2.com.
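As an illustration, the "+"-separated segment could be split into individual domains like this. This is a hypothetical sketch (the function name parseDomains and the filtering logic are assumptions), not the server's actual routing code:

```javascript
// Hypothetical sketch of how a /site/:domains route segment could be
// split into individual domains; the real implementation may differ.
function parseDomains(segment) {
  // "example.com+example2.com" -> ["example.com", "example2.com"]
  return segment.split("+").filter((d) => d.length > 0);
}

console.log(parseDomains("example.com+example2.com"));
```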
See here for how to update your installation.
From #55: a /details/:id/index route that redirects you to the main index page of the entry.
See here for how to update your installation.
This release fixes a few bugs I noticed:
Other improvements:
See here for how to update your installation.
Add options for public access (#41, thanks TeoGoddet). The new options allow_public_view and allow_public_all let you configure whether sites can be viewed without logging in and whether logging in is required at all.
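If your installation is configured through a JSON file (an assumption; check the project's documentation for where these options actually live), allowing public viewing while still requiring login for everything else might look like:

```json
{
  "allow_public_view": true,
  "allow_public_all": false
}
```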
If you added a URL like https://example.com/some%20file.pdf, the server would download it and save it as some%20file.pdf. This broke the link to the file, so you couldn't access it directly. This release fixes the issue by saving the file as some file.pdf, so the link works correctly.
The server should no longer throw an error if the content file doesn't exist; instead, it treats the file as if it contained an empty array. The file is now also created at startup, so this can only happen if you delete it while the server is running. The check now uses the more generic error code ENOENT instead of a specific one that only worked on Windows. See #39 for more details on the problem.
Page generation should now be faster, as the server creates HTML elements directly instead of building an HTML string that had to be parsed again. Depending on the number of elements on the page, that parsing step could take a long time.
This release fixes the following bugs:
The documentation has been expanded:
This release adds the following keyboard shortcuts:
ESC to return to the main page.
Instead of comparing strings when setting event listeners, the function now checks the current state, avoiding unnecessary loops.
In short: this speeds up the frontend when a page is generated.