Using Skipfish for Numerical URL Brute-forcing

Suppose we have a web site that stores data files in a web-accessible directory /data/ which is not indexable, and that the files are named /data/something_<timestamp>.txt, with the timestamp in "yymmddhhmmss" format. We want to find as many of the data files in this directory as possible.

One obvious choice for this type of job is Burp Intruder. It can easily handle this kind of brute-forcing and helps analyse the results. Unfortunately, for a large number of requests it is not fast enough: in my testing it managed about 24 requests per second for this type of test. At that rate, testing a day's worth of timestamps (60*60*24 = 86400) takes an hour.
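To put those numbers in perspective, here is a quick back-of-the-envelope calculation (plain Python, nothing tool-specific; the rates are the ones measured above) showing how request rate translates into total scan time for a day's worth of timestamps:

```python
# One candidate filename per second of the day: 24 * 60 * 60 of them.
candidates = 24 * 60 * 60  # 86400

for label, rate in [("Burp Intruder", 24), ("ApacheBench ceiling", 200)]:
    minutes = candidates / rate / 60
    print("%-20s %5d req/s -> %6.1f minutes" % (label, rate, minutes))

# At 24 req/s: 86400 / 24 = 3600 seconds, i.e. a full hour.
# At 200 req/s the same list takes just over 7 minutes.
```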

Using ApacheBench to see how much performance I could get out of the site showed that it can go as fast as 200 requests per second - approximately 10 times faster than Burp. Thinking of a high-performance web scanner, I thought of Skipfish.

Skipfish currently does not really support this type of test. In fact, "Password brute-force and numerical filename brute-force probes" are in its Feature Wishlist, so getting it to do what I wanted took some wrestling. However, the result was quite good: Skipfish was able to do up to 400 requests per second and finished brute-forcing the 86400 file names in less than 7 minutes, making about 170000 HTTP requests (roughly twice as many as there are file names).

So here is how to make Skipfish brute-force numeric file names. First we need to create a dictionary of possible file names in Skipfish's wordlist format. This can be done with Perl:

perl -e 'for $h (0..23) { for $m (0..59) { for $s (0..59)
{ $w = sprintf("something_110202%02d%02d%02d.txt",$h,$m,$s) ;
print "w 1 1 1 $w\n"; }}}' > mywordlist.wl

This generates potential file names for today - 02/02/2011, hence 110202 in the file name.
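For reference, the same wordlist can also be generated without Perl. A minimal Python sketch, using the same "w 1 1 1" record prefix and 110202 date stamp as the one-liner above:

```python
# Emit one Skipfish wordlist entry ("w 1 1 1 <keyword>") for every
# second of 2011-02-02, matching the Perl one-liner above.
with open("mywordlist.wl", "w") as f:
    for h in range(24):
        for m in range(60):
            for s in range(60):
                f.write("w 1 1 1 something_110202%02d%02d%02d.txt\n" % (h, m, s))
```

The resulting file should contain 86400 lines, the first being "w 1 1 1 something_110202000000.txt".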

Now make Skipfish use this wordlist and prevent it from crawling the web site or fuzzing anything at all:

./skipfish -W mywordlist.wl -I http://www.example.com/data/ -o tt -O -P -L -V -Y -d 5 -c 86400 http://www.example.com/data/

The options used have the following meaning:

-I http://www.example.com/data/ - only test URLs under this directory
-o tt - write output to the "tt" directory
-O - do not submit any forms
-P - do not parse HTML, etc., to find new links
-L - do not auto-learn new keywords for the site
-V - do not update the wordlist based on scan results
-Y - do not fuzz extensions in directory brute-force
-d 5 - maximum crawl tree depth. Not sure if this is necessary.
-c 86400 - maximum children to index per node. This one seems to be important: it tells Skipfish not to give up after testing the default (512) file names per directory, but to try our whole list.

Update: Michal Zalewski, the author of Skipfish, says regarding the "-c 86400" option:

"This shouldn't be necessary, unless you are running into some sort of a quirk / bug: the setting limits the maximum number of discovered subdirectories, not the maximum number of attempts; so, the default value is exceeded only if more than 512 names are actually confirmed to exist (non-404 responses)."

Comments

nice man, keep up the good work. can you make a tutorial for xss?
i know there are a lot of them on the net, but I would like a clear picture.

Thanks.

XSS tutorial? You must be kidding. :)

why, did i say something wrong?
too easy? or what?

Well, actually it depends.

Basic cross-site scripting with alert(document.cookie) has been beaten to death, and there are ten million tutorials for it on the web.

If you are interested in more practical exploits, there is enough material to fill a book - like this one: XSS Attacks: Cross Site Scripting Exploits and Defense.

ok thanks. but how about Cookie stuffing? :D

What about it? You want a tutorial? ;)

:)) how did you guess? YES :p

Absolutely not. I am a wise woman.

ok then. you rock! :)