spidy Versions

The simple, easy-to-use command-line web crawler.

1.4.0

6 years ago

Much update!

  • Confirmed and added support for OS X and Linux, thanks to michellemorales and j-setiawan.
  • Updated the documentation to reflect the current state of the project. There is still work to be done there.
  • Removed 'bad file' functionality as it wasn't working as intended and wasn't important anyway. That's what error logs are for.
  • Now resolves <base> tags to grab links that wouldn't have been recognized before. Thanks, lxml!
  • Added an optional (on by default) file-size check: files larger than 500 MB won't be downloaded, provided the site returns a Content-Length header.
  • Added Firefox (on Ubuntu) as an option for browser spoofing.
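The base-tag resolution and file-size check above can be sketched in a few lines of Python. These helpers are illustrative only: the names `resolve_link` and `should_save` are assumptions, and the sketch uses the standard library's `urllib.parse` rather than lxml, so it is not spidy's actual code.

```python
from urllib.parse import urljoin

MAX_FILE_SIZE = 500 * 1024 * 1024  # the 500 MB cap mentioned above


def resolve_link(page_url, base_href, link_href):
    """Resolve a link the way a browser would when the page declares
    a <base href="...">: relative links resolve against the base URL,
    not against the page's own URL."""
    base = urljoin(page_url, base_href) if base_href else page_url
    return urljoin(base, link_href)


def should_save(headers, max_bytes=MAX_FILE_SIZE):
    """Skip files the server reports as larger than the cap.
    If the site omits the Content-Length header, there is nothing
    to check, so the download is allowed."""
    length = headers.get("Content-Length")
    if length is None:
        return True
    try:
        return int(length) <= max_bytes
    except ValueError:
        return True  # malformed header: fall back to allowing the download
```

For example, a relative link like `img/logo.png` on a page that declares `<base href="https://cdn.example.com/assets/">` resolves against the CDN, not the page URL, which is exactly the class of links a crawler misses if it ignores the base tag.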

spidy.zip contains just crawler.py and config/, while the source code archives contain all files.

1.3

6 years ago

Final 1.3.0 release. Error handling has been added back in, with no further changes needed.

Optimized all file creation and loading. Everything is now saved with UTF-8 encoding, allowing for foreign characters and EMOJI in pages.

1.3-alpha

6 years ago

Optimized all file creation and loading. Everything is now saved with UTF-8 encoding, allowing for foreign characters and EMOJI in pages.

This release is in alpha while the error-handling system is being slightly redesigned. It is still fully functional, however!

1.2

6 years ago

Added domain restrictions. Crawling can now be limited to a given domain or path prefix, such as wsj.com, https://www.wsj.com, or https://www.wsj.com/article. The restriction can be set when entering configuration settings or in the config files. Also more bugfixes and MIME types, because those are cool.
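A crude sketch of how such a restriction can work: treat the configured value as a prefix or substring that every followed URL must match. The function `within_domain` is hypothetical, not spidy's actual implementation.

```python
def within_domain(url, restriction):
    """Only follow links that fall under the configured restriction.
    A bare domain like "wsj.com" may appear anywhere in the URL, while
    a fuller value like "https://www.wsj.com/article" acts as a strict
    prefix. An empty restriction means crawl everything."""
    if not restriction:
        return True
    if restriction.startswith("http"):
        return url.startswith(restriction)
    return restriction in url
```

Under this scheme, restricting to https://www.wsj.com/article keeps the crawl inside that path, while a bare wsj.com also admits subdomains.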

1.0

6 years ago

The first official release of spidy! A GUI is in the works, as well as many more awesome features.

spidy.zip contains only the files necessary to run the crawler, while the source code downloads contain all the things.