Hacked! And I didn't like it - URLScan is Step Zero
My blog was down a few days ago. I've had downtime measured in minutes over the last few years, but as far as I recall, it's never been down for any significant stretch of time. Keyvan noticed that a bunch of us were attacked. Phil Haack was also, ahem, haacked.
UPDATE: To be clear, I wasn't really hacked. I was "DoS'ed," or brought down for a little bit by a distributed denial of service attack that spiked my CPU. I'm advocating that you constrain the URLs and input that get to your application, either by black-listing bad content or white-listing allowed content.
I host at ORCSWeb and have forever. We're in the process of making a lot of changes to my blog. I'm on an x64 machine (I've blogged about DasBlog on x64 before), but running in a 32-bit AppPool. We're moving my blog to a dedicated server, switching to x64, and we were also upgrading to UrlScan 3.0, which just had a beta release in June.
Anyway, in this crazy process, there was a window of time where I didn't have UrlScan enabled on the machine. I mistakenly thought that the Ninjas wouldn't be able to catch me if I was on fire. In fact, not so.
Speaking of Ninjas, Wade Hilmo is a ninja at Microsoft who writes UrlScan.
There's a great IIS7 Request Filter for protecting against nasty attacks, but UrlScan Beta 3.0 still has the edge on the filter for the time being. Version 3.0 of UrlScan adds:
- Support for query string scanning, including an option to scan an unescaped version of the query string.
- Change notification for configuration (no more restarts for most settings).
- UrlScan can be installed as a site filter. Different sites can have their own copy, with their own configuration.
- Escape sequences can be used in the configuration file to express CRLF, a semicolon (normally a comment delimiter) or unprintable characters in rules.
- Custom rules can be created to scan the URL, query string, a particular header, all headers, or a combination of these. The rules can be applied based on the type of file requested (a sample rule is sketched just after this list).
- Support for 64 bit IIS worker processes.
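To give you a feel for those custom rules, here's roughly what one looks like in UrlScan.ini, modeled on the SQL injection rule samples that circulated with the 3.0 Beta. Treat it as a sketch and verify the section and option names against the UrlScan.ini reference that ships with the beta:

```ini
[Options]
RuleList=SQL Injection

[SQL Injection]
AppliesTo=.aspx
DenyDataSection=SQL Injection Strings
ScanUrl=0
ScanAllRaw=0
ScanQueryString=1
ScanHeaders=

[SQL Injection Strings]
--
@        ; also catches @@
cast
declare
exec     ; also catches execute
```

Note that broad keyword lists like this are exactly how legitimate requests get caught in the crossfire, which is what happened to my OpenID logins, as I describe further down.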
This release of UrlScan is a beta, but its config file is backward compatible and there's a Go-Live license. It's working great for me. However, to quote Wade:
"While they are effective against the current wave of automated attacks, they cannot protect against more directed attacks against a specific server."
This was a SQL Injection attack with URLs that looked like this (and some variations):
```
[08-11-2008 - 17:29:31] Client at 201.67.x.x: Query string length exceeded maximum allowed. Request will be rejected. Site Instance='13', QueryString= 'guid=0b93befc-3543-4bfc-ba8e-6cd340b6d9d3;DECLARE%%20@S%%20VARCHAR(4000);SET%%20@S=CAST(0x4445434C4152452...(incrediblyLONGQueryString)...220%%20AS%%20VARCHAR(4000));EXEC(@S);--', Raw URL='/blog/CommentView.aspx'
```
In this example, it's hitting CommentView.aspx and trying to tack a bunch of T-SQL onto the end, with the most evil part encoded inside a CAST() statement. It's a distributed attack, with a bunch of (likely innocent) drones reaching out to be mean. Over a period of a few hours, there were thousands of attacks from over 250 different IP addresses.
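If you're curious what those payloads actually do, the CAST(0x... AS VARCHAR(4000)) trick is just hiding plain ASCII T-SQL as hex; the 0x4445434C4152452... prefix in my log decodes to "DECLARE ". Here's a minimal C# sketch for decoding a blob like that so you can read it. The hex string below is a short stand-in, not the real payload from my logs:

```csharp
using System;
using System.Text;

class DecodeSqlPayload
{
    static void Main()
    {
        // Stand-in for the 0x... blob in the attack URL (not the real payload).
        // This one decodes to "DECLARE @S VARCHAR(4000)".
        string hex = "4445434C4152452040532056415243484152283430303029";

        // Each pair of hex digits is one ASCII byte; decode them all.
        byte[] bytes = new byte[hex.Length / 2];
        for (int i = 0; i < bytes.Length; i++)
        {
            bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
        }

        // Prints the T-SQL that EXEC(@S) would have run against a database.
        Console.WriteLine(Encoding.ASCII.GetString(bytes));
    }
}
```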
Fortunately, DasBlog doesn't use a database at all, but rather a bunch of XML files for storage. Unfortunately, the application was still trying to map these query strings to blog posts, and the resulting load took my blog down.
There are really two main things to think about when dealing with user input, remembering that the URL is an input point for your application!
- Trust no input from the user.
- Constrain the input that reaches your application code as much as you can. Deny as early as possible (hardware, load-balancer, appliance, module, etc).
We need to tighten up DasBlog to more quickly reject URLs that are surely not requests for blog posts, but a tool like UrlScan allows me to easily reject obvious attacks in a way that is more efficient than letting my application code do it.
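As a sketch of what "reject as early as possible" can look like inside the application itself (this is illustrative, not DasBlog's actual code; the module name is made up and the guid parameter is just the one that shows up in the attack URLs), an IHttpModule can kill obviously-bogus requests in BeginRequest before any page even spins up:

```csharp
using System;
using System.Web;

// Illustrative sketch only - not DasBlog's actual code. The idea: throw away
// requests that can't possibly be a blog post before any page code runs.
public class EarlyRejectModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += new EventHandler(OnBeginRequest);
    }

    private static void OnBeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;

        // CommentView.aspx expects guid=<a real GUID> and nothing else interesting.
        if (app.Request.Path.EndsWith("CommentView.aspx", StringComparison.OrdinalIgnoreCase))
        {
            string guid = app.Request.QueryString["guid"];

            if (guid == null || guid.Length > 36 || !IsGuid(guid))
            {
                // Not a real post request - end it here, cheaply.
                app.Response.StatusCode = 404;
                app.CompleteRequest();
            }
        }
    }

    private static bool IsGuid(string value)
    {
        // Guid(string) throws on anything that isn't a well-formed GUID.
        try { new Guid(value); return true; }
        catch { return false; }
    }

    public void Dispose() { }
}
```

You'd register something like that under httpModules in web.config (or under system.webServer/modules on IIS7 in integrated mode). UrlScan still rejects bad requests earlier and more cheaply, which is the whole point, but belt and suspenders.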
I would encourage you to take a moment and do a threat analysis on your own websites, and make sure that you ARE constraining input appropriately.
One thing to note: you can and will likely break things for a while with UrlScan, as it does constrain input and you might have valid URLs you aren't aware of. For example, UrlScan broke OpenID authentication for me because the query strings included dots and the word "open," both of which UrlScan was denying. Other denials can happen because of keywords in the URL or the length of the query string. Be sure to test appropriately and watch the UrlScan logs for denials. You can set very blanket rules, or constrain by extension.
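If a rule is catching requests you actually want, UrlScan 3.0 also appears to support explicitly allow-listing known-good URLs and query strings, so you don't have to loosen an entire rule to fix one endpoint. A sketch, with a made-up page name standing in for whatever you need to exempt; verify the section names against the UrlScan 3.0 documentation before relying on them:

```ini
[AlwaysAllowedUrls]
; Hypothetical endpoint name. URLs listed here bypass UrlScan's checks entirely,
; so only list pages you trust to validate their own input.
/blog/OpenIdLogin.aspx
```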
We always installed UrlScan on staging and production machines when I was in banking, and made it part of the testing and deployment process. In times like these, having a filter like UrlScan installed is Step Zero. I will remember that in the future! Thanks to ORCSweb for answering my 2am emails and helping me fix this in near-real-time!