h2buster Versions

A threaded, recursive, web directory brute-force scanner over HTTP/2.

v0.4d

4 years ago
  • Added an option to perform wildcard response detection (-wc). This way, if a server replies with status codes other than 404 for resources that do not exist (for example, with a redirect), those responses can be detected as well (see the sketch after this list).
  • The input target is first checked for redirects. If a redirect is detected, the user is asked whether or not to update the target to the redirect location.
  • Changed the default number of connections to twice the number of CPU cores on the machine running the scan.
  • Improved compatibility when spawning processes on Unix-like systems.
  • Entries identified as directories are now marked as such in the output.
  • Improved error and interrupt handling.
  • Minor performance enhancements.
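
As a rough sketch of the idea behind -wc (not h2buster's actual code, which speaks HTTP/2 via hyper; urllib over HTTP/1.1 is used here for brevity): probe a random path that almost certainly does not exist and record the status code the server returns for it, so that later responses with the same code can be treated as "not found".

    import secrets
    import urllib.error
    import urllib.request

    class NoRedirect(urllib.request.HTTPRedirectHandler):
        # Returning None makes 3xx responses raise HTTPError instead of being
        # followed, so the redirect status code itself can be recorded.
        def redirect_request(self, req, fp, code, msg, headers, newurl):
            return None

    def wildcard_status(base_url):
        # Probe a random path that should not exist on the target.
        probe = base_url.rstrip("/") + "/" + secrets.token_hex(16)
        opener = urllib.request.build_opener(NoRedirect)
        try:
            with opener.open(probe) as resp:
                return resp.status
        except urllib.error.HTTPError as e:
            return e.code

    # Any scan hit whose status matches wildcard_status(target) can then be
    # ignored even though it is not a 404.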

v0.4c

4 years ago
  • Added an option to change the HTTP request method (-m). Valid values are HEAD (default) and GET.
  • Added an option to display the response size (-l). This forces the request method to GET.
  • The robots.txt scan output is now more verbose.
  • Fixed the case where /robots.txt returns a redirect (when using -rb).
  • Minor performance improvements.

v0.4b

4 years ago
  • Several performance improvements.
  • Bug fixes in the robots.txt parser.
  • Sitemaps found in robots.txt are now reported for manual inspection (they are not parsed).
  • Added an option (-vr) to verify TLS certificates (otherwise they are not checked).
  • Duplicate entries are now properly filtered, taking file extensions into account.
  • Status code display made cleaner.

v0.4a

4 years ago
  • Updated most strings to f-strings. This makes Python 3.6 a requirement.
  • Added the option to scan for the robots.txt file (-rb):
    • If found, the user is prompted about whether to use its information or not.
    • The user can retrieve all entries in the file, only those allowed for h2buster's own User-Agent, or ignore the file completely.
    • HOWEVER, dictionary entries ARE NOT CHECKED against the robots.txt rules. Use your wordlist at your own risk. I might add a command-line option to only use dictionary entries that robots.txt allows.
    • The information obtained from this file could be used in a smarter way. For now, every directory found is checked at its respective recursion depth: the entry /a/b/c results in /a being checked in the first iteration; if /a is found, /a/b is searched in the next recursive iteration, and likewise for /a/b/c (see the sketch after this list).
  • Handling of reset HTTP/2 streams:
    • More information is given about the process that handles the stream.
    • Increased the sleep time for the thread handling the reset.
  • Removed an unnecessary import. The remaining imports are now tidier.
  • Duplicate entries in the input wordlist are now requested only once.
  • Increased modularity by moving parser algorithms to external classes.
  • Removed the enable_push parameter from a call to the underlying hyper library, since some versions don't seem to accept it.
  • Changed the way time is benchmarked: it now represents how many seconds the actual scan took (as opposed to option parsing and checking plus the scan itself).
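
A minimal sketch of the per-depth recursion described above for an entry such as /a/b/c; EXISTING and check() are hypothetical stand-ins for the target's real contents and the real HTTP/2 request (this is not h2buster's actual code):

    EXISTING = {"/a", "/a/b"}  # hypothetical stand-in for the target's contents

    def check(path):
        # Stand-in for the real HTTP/2 request; True means the path was found.
        return path in EXISTING

    entry = "/a/b/c"
    prefix = ""
    for part in entry.strip("/").split("/"):
        prefix += "/" + part
        if not check(prefix):  # stop descending once a level is missing
            break
        print(f"found: {prefix}")  # prints /a, then /a/b; /a/b/c is checked last and not found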

v0.3f

4 years ago
  • Added an option to ignore specific response codes (-b) by providing a list of codes separated by a vertical bar (|), e.g. -b '404|403'. The default is 404.

v0.3e-2

4 years ago
  • Improved error handling:
    • Processes now exit gracefully when things go wrong in the middle of a scan, instead of hanging.
    • Keyboard interrupt handling is now less ugly.
  • Changed default connections (-c) to 4. This seems to yield a performance improvement in most cases.
  • Changed --help text to be tidier.
  • Changed line endings to UNIX-style (in case you were trying to run as ./h2buster.py).

v0.3e-1

4 years ago
  • Improved error handling for RFC non-compliant HTTP/2 servers.

v0.3e

4 years ago
  • A list of headers to be sent with each request can be given with -hd, using the format -hd 'header->value[|header->value|header->value...]'. For example: -hd 'user-agent->Mozilla/5.0|accept-encoding->gzip, deflate, br' (see the sketch after this list).
  • Extensions are now also separated by a vertical bar (|), for consistency (e.g. -x '.php|.js|blank|/').
  • The Server header of the first response (if there is one) is now displayed at the beginning of the scan.
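
A minimal sketch (assumed behavior, not h2buster's actual parser) of how a -hd value in this format could be turned into a header dictionary:

    def parse_headers(hd):
        # Pairs are separated by '|'; within a pair, '->' separates name and value.
        headers = {}
        for pair in hd.split("|"):
            name, _, value = pair.partition("->")
            headers[name.strip()] = value.strip()
        return headers

    print(parse_headers("user-agent->Mozilla/5.0|accept-encoding->gzip, deflate, br"))
    # {'user-agent': 'Mozilla/5.0', 'accept-encoding': 'gzip, deflate, br'}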

v0.3d-1

5 years ago
  • Improved error handling for reset connections, HTTP/1-only targets, targets that do not exist and TLS errors.

v0.3d

5 years ago
  • A list of extensions to scan can be given with -x, separated by semicolons. For example, -x '.php;.js;blank;/' will check for the .php, .js, blank and / file endings. Note that the blank keyword signifies no file ending at all (see the sketch after this list).
  • Improved target parsing (-u).
  • Added stdout feedback showing the entry currently being scanned (Linux and OS X only).
  • Changed default threads (-t) from 15 to 20.
  • Improved color printing performance. The program should run more smoothly on both UNIX-based systems and Windows.
  • Other very slight performance improvements.
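
A minimal sketch (assumed behavior, not h2buster's actual parser) of expanding a -x value into the list of file endings to try:

    def parse_extensions(x):
        # The 'blank' keyword stands for no file ending at all.
        return ["" if ext == "blank" else ext for ext in x.split(";")]

    print(parse_extensions(".php;.js;blank;/"))
    # ['.php', '.js', '', '/']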