Software and web application security

April 23, 2007

I18N input validation whitelist filter with System.Globalization and GetUnicodeCategory

Filed under: software security — chrisweber @ 10:48 pm

Maybe you're building internationalized code and wondering how to write a whitelist filter that covers all the different character sets you're planning to support. If there are more than ten, especially some of the larger East Asian sets, this might seem like an unwieldy or tricky process.
Well, luckily it's easier than most people would think. Building a good input validation filter can be simplified with .NET's GetUnicodeCategory. Use the version from the System.Globalization namespace (CharUnicodeInfo.GetUnicodeCategory), though, as the one in System.Char looks like it may take a back seat.

With GetUnicodeCategory you can simply build a whitelist of the character categories you want to allow. So get away from thinking you have to write a regex filter listing out every character range you want to allow in each character set; it's much simpler than that!

The Unicode standard assigns every character to one of roughly thirty general categories. They make sense, too; for example Control (Cc), Lowercase Letter (Ll), Uppercase Letter (Lu), and Math Symbol (Sm). So you might want to allow only letters, numbers, and some punctuation in your whitelist. That could be achieved with the following snippet:


using System.Globalization;

char cUntrustedInput = 'Å'; // stand-in for the untrusted user-input
UnicodeCategory cInputCategory = CharUnicodeInfo.GetUnicodeCategory(cUntrustedInput);
if (cInputCategory == UnicodeCategory.LowercaseLetter ||
    cInputCategory == UnicodeCategory.UppercaseLetter ||
    cInputCategory == UnicodeCategory.DecimalDigitNumber ||
    cInputCategory == UnicodeCategory.TitlecaseLetter ||
    cInputCategory == UnicodeCategory.OtherLetter ||
    cInputCategory == UnicodeCategory.NonSpacingMark ||
    cInputCategory == UnicodeCategory.DashPunctuation ||
    cInputCategory == UnicodeCategory.ConnectorPunctuation)
{
    // character looks safe, continue
}
else
{
    // character is not allowed, fail
}
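In practice you'll usually validate a whole string, not one character. Here's a minimal sketch of the same whitelist applied per character; the Whitelist class name and IsAllowed helper are my own illustration:

using System;
using System.Globalization;

static class Whitelist
{
    // The same categories allowed by the snippet above
    static readonly UnicodeCategory[] Allowed =
    {
        UnicodeCategory.LowercaseLetter,
        UnicodeCategory.UppercaseLetter,
        UnicodeCategory.TitlecaseLetter,
        UnicodeCategory.OtherLetter,
        UnicodeCategory.DecimalDigitNumber,
        UnicodeCategory.NonSpacingMark,
        UnicodeCategory.DashPunctuation,
        UnicodeCategory.ConnectorPunctuation,
    };

    public static bool IsAllowed(string untrustedInput)
    {
        foreach (char c in untrustedInput)
        {
            // Reject the whole input on the first out-of-whitelist character
            if (Array.IndexOf(Allowed, CharUnicodeInfo.GetUnicodeCategory(c)) < 0)
                return false;
        }
        return true;
    }
}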

Not too bad, eh?


January 18, 2007

Preventing cross-site request forgery (XSRF, CSRF, aka one-click attack)

Filed under: penetration testing, software security, web apps — chrisweber @ 9:39 pm

The XSRF attack exploits the stateless nature of HTTP and your web application. In essence, an attacker can trick you into taking an action against a site. To fall victim you would just need to visit the attacker's site, get caught by a phishing attack, etc. Here's an example. Consider a web application that sells things; call it Amazonians. You have a user profile page there where you can change your account's email address. This is the email address where your password is sent when you click "forgot password." When you use this page to set your email address, it's a simple HTTP POST, and the only value sent up is the new email address, wrapped in SOAP, JSON, whatever. In order for this request to succeed, however, your browser needs to send a valid cookie.

Here's the attack. You visit the evil site, which has prepared the above request to change your email address to something the evil site controls. By visiting the evil site, your browser automatically makes that request to Amazonians (by virtue of a link or an XmlHttpRequest). Since your browser has valid cookies for the site, the request succeeds, and now the evil site owns your account.

Okay so how do we prevent this? Some options are:

a) Amazonians requires that you type in your password when making the request to change your email address. Unless the evil site can trick you into entering your Amazonians password, the request will fail.

b) Amazonians requires that you enter a captcha. Same as above.

c) Amazonians generates a one-time, unique random value that is sent to you when the page is requested. This value is tied to your session and must be sent up in the postback. The problem with this technique is that some cross-domain browser bugs may let the attacker read this value; consider the Internet Explorer mhtml:// bug. (A sketch of this token approach follows.)
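Roughly, option (c) might look like this in ASP.NET-era C#. This is a minimal sketch under assumptions: the CsrfToken class and its Issue/Validate names are made up for illustration, the token lives in session state, and the page embeds it in a hidden form field.

using System;
using System.Security.Cryptography;
using System.Web.SessionState;

public static class CsrfToken
{
    // Generate a random token, tie it to the session, and return it so
    // the page can embed it in a hidden form field.
    public static string Issue(HttpSessionState session)
    {
        byte[] buffer = new byte[32];
        new RNGCryptoServiceProvider().GetBytes(buffer);
        string token = Convert.ToBase64String(buffer);
        session["CsrfToken"] = token;
        return token;
    }

    // On postback, the submitted token must match the one tied to the
    // session; an attacker's forged request won't know it.
    public static bool Validate(HttpSessionState session, string submitted)
    {
        string expected = session["CsrfToken"] as string;
        return expected != null && expected == submitted;
    }
}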

The key to preventing the XSRF attack in the face of such browser bugs is to really analyze the web app and understand which calls are most critical and require a solid mitigation. Obvious places include:

  • anywhere password or account information can be changed
  • anywhere data or records can be added, modified or deleted

January 15, 2007

Uninformed.org second paper on subverting PatchGuard

Filed under: reverse engineering, software security — chrisweber @ 9:13 am

Uninformed is pleased to announce the release of its sixth volume. This volume includes three articles on reverse engineering and exploitation technology. These articles include:

– Engineering in Reverse: Subverting PatchGuard Version 2
Author: Skywing

– Engineering in Reverse: Locreate: An Anagram for Relocate
Author: skape

– Exploitation Technology: Exploiting 802.11 Wireless Driver Vulnerabilities on Windows
Authors: Johnny Cache, H D Moore, skape

This volume of the journal can be found at:

http://www.uninformed.org/?v=6

January 14, 2007

How to: Fuzzing Web Services on IIS 6.0 and ASP.NET

Filed under: penetration testing, software security, web apps — chrisweber @ 12:57 pm

So we want to fuzz something SOAPy, again. Well, here's how we're gonna do it. The approach I like to take with clients is gray-box, or code-assisted, penetration testing. Gray-box analysis is a powerful technique combining input testing with source analysis, runtime tracing, profiling, and debugging to identify real issues in the software. In this example we're picking up the scenario from the last post, "To fuzz or not to fuzz web services": web services written in managed code, plus some unmanaged code modules handling user input.

SOAP fuzzing should begin by taking the client requests for each service and isolating the element values to be manipulated. In the first stage of fuzzing we will change the entire value, without conforming to the value format. This should turn up gross errors in the consumption of the web service data or denial of service conditions from unexpected data formats. In the second fuzzing sweep we’ll present the value in the correct format, with just a portion of that value replaced with a malformed value. This phase should find issues that would pass a validation gateway, but still cause problems when the data is consumed.

In all fuzzing cases we will start from a perspective of a most-correct request, where only a single value is fuzzed, before fuzzing multiple values concurrently. Additional phases will be specific tests based on a deep understanding of the logic being tested, such as fuzzing a value that states UserID=5, with a range of integers.

For example, during phase one everything between the Value tags should be fuzzed as a single blob. The string “org:division/category/DATA=DATA” will be replaced as a whole with the fuzz strings.

Original

<Value>org:division/category/DATA=DATA</Value>

Fuzzed

<Value>AAAAAAAAAAAAAAAAAAAAAA</Value>

In phase two, the value is separated into its subcomponents. For each subcomponent a fuzzed value is inserted until all portions of the value have been individually fuzzed (a code sketch follows the examples below).

Original

<Value>org:division/category/DATA=DATA</Value>

Fuzzed #1

<Value>AAAAAAAAA:division/category/DATA=DATA</Value>

Fuzzed #2

<Value>org:AAAAAAAAAAAA/category/DATA=DATA</Value>

Fuzzed #3

<Value>org:division/AAAAAAAAAAAAA/DATA=DATA</Value>

Fuzzed #4

<Value>org:division/category/AAAAAAAAAA=DATA</Value>

Fuzzed #5

<Value>org:division/category/DATA=AAAAAAAAAA</Value>
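Here's a minimal C# sketch of how that phase-two pass might be automated, splitting on the separators from the example above and fuzzing one subcomponent at a time; the SubcomponentFuzzer name and Variants helper are my own illustration:

using System;
using System.Collections.Generic;

class SubcomponentFuzzer
{
    static readonly char[] Separators = { ':', '/', '=' };

    // Yield one variant per subcomponent, each with just that piece
    // replaced by the payload and the original separators kept in place.
    public static IEnumerable<string> Variants(string original, string payload)
    {
        string[] parts = original.Split(Separators);
        int pos = 0;
        for (int i = 0; i < parts.Length; i++)
        {
            string head = original.Substring(0, pos);
            string tail = original.Substring(pos + parts[i].Length);
            yield return head + payload + tail;
            pos += parts[i].Length + 1; // step past this part and its separator
        }
    }

    static void Main()
    {
        foreach (string v in Variants("org:division/category/DATA=DATA", "AAAAAAAAAA"))
            Console.WriteLine("<Value>" + v + "</Value>");
    }
}

Running it prints the five fuzzed variants shown above, one subcomponent at a time.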

And to keep it going, fuzzing should be expanded quite a bit beyond the example above. In addition to fuzzing strings with other strings, integers should be used, byte arrays, and so on. The separator values (e.g. : and / and =) should also be included in testing. Typical payloads used in fuzzing are shown below. When testing we usually apply the relevant selection to the logic being tested.

  • Character multiples
  • Max unsigned and signed integer values
  • Variations on format strings using '%n'
  • Long strings
  • Empty strings and null values
  • Extended ASCII
  • Binary values
  • Base64 and HTML encoded values
  • SQL Injection
  • Common bad ASCII (' " < >)
  • All numbers
  • All letters
  • All spaces
  • Invalid date formats
  • Dictionaries relevant to the application

To monitor the behavior of the web service during the fuzzing runs, we attach WinDbg to the worker process with heap checking enabled. We break on any significant exceptions to investigate the call stack and relevant code sections. The event log is combed for any reported errors, which are then investigated. To detect denial of service conditions we use Perfmon to observe the process's CPU and memory usage. In our SOAP fuzzing we should also insert a unique marker into each request, and log each sent request, so that we can later reproduce the condition that caused an error. In this case we will place an incrementing number in the User-Agent value of the SOAP requests, which is readable in the IIS logs. In addition, the randomness of fuzzing can be seeded with a value to allow for reproducibility. This is possible with some of the fuzzing frameworks out there, which we haven't talked about much, such as Peach.
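For the User-Agent marker, a quick sketch assuming the fuzzer builds its SOAP requests with HttpWebRequest; the CreateSoapRequest helper is my own illustration:

using System.Net;

class FuzzClient
{
    static int requestId = 0;

    // Tag each fuzz request with an incrementing marker so a crash or
    // error can be matched back to the cs(User-Agent) field in the IIS logs.
    static HttpWebRequest CreateSoapRequest(string url)
    {
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
        req.Method = "POST";
        req.ContentType = "text/xml; charset=utf-8";
        req.UserAgent = "soap-fuzzer-" + (++requestId);
        return req;
    }
}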

January 13, 2007

To fuzz or not to fuzz web services…

Filed under: penetration testing, software security, web apps — chrisweber @ 12:42 pm

Is it worth the time to run input fuzzing tests against web services? When engaging a client for a security review, I'm often the one to pose this question. Sure, why not… right? Well, honestly, there's a more precise way to answer it. First we really need to understand the goals of the security review, so a few questions are in order.

  1. Has threat modeling been done or is this my job?
  2. How much time and budget do we have for a security review?
  3. How complex are the web services? E.g., how many parameters do they take, and in what format?
  4. Are the web services written in managed code?
  5. Is user-input passed to unmanaged code?

Let’s take these answers from a common scenario:

  1. Yes threat modeling is complete
  2. We have about 2 or 3 weeks that you can use to test
  3. Very complex, they use WS-Security, take hundreds of parameters, some encrypted, using custom formats, SOAP, as well as embedded XML blobs
  4. Yes, they’re written in C# using the .NET Framework
  5. Some specific elements of user-input are handled by unmanaged code modules

Some things not obvious in these questions are:

  • that the client is highly interested in finding Denial of Service (DoS) issues
  • that millions of people will be using these Web Services whether they know it or not
  • that no input fuzzing has been done to date

With 2-3 weeks we could get a lot done in a security review focused just on the web services. It's becoming clear that fuzzing input would be a worthwhile venture; we'll likely turn up some DoS issues, and possibly some unmanaged code issues as well. Since we have a decent timeframe, we'll be checking for the following issues, not all of which fuzzing is good for:

  • elevation of privilege (EoP)
  • repurposing attacks
  • cross-site scripting (yes, even web services in some cases)
  • information disclosure
  • session replay
  • SQL Injection
  • DTD attacks
  • XML validation
  • script injection
  • repudiation
  • denial of service
  • buffer overrun

Fuzzing will help with some of these, so at this point the answer is yes, let's do it. We'll also be doing some code review, which is great for quickly identifying issues such as DoS, XML validation, and DTD attacks. And we'll be studying the specs and architecture along the way to keep a clear understanding of the system and to help identify repurposing attacks, which will then be tested for confirmation.

Ok let’s go!

January 12, 2007

Elevation of Privilege lowest common denominator

Filed under: penetration testing, software security, web apps — chrisweber @ 11:57 pm

Sometimes a web app EoP vulnerability is as difficult to exploit as stealing a cookie or guessing a password, and other times it's as easy as incrementing an integer. Today I was testing another web app and modifying records belonging to other users just by incrementing the recordId value… I couldn't believe it was 2007. Luckily the fix I discussed with the devs was simple, though the application architecture had more severe systemic issues which allowed this in the first place.
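The post doesn't spell out the fix, but the usual remedy for this class of bug is a server-side ownership check along these lines; Record, GetRecord, and CurrentUserId are hypothetical stand-ins for the app's own data layer and authentication:

// Never trust the recordId alone; verify server-side that the record
// actually belongs to the logged-in user before touching it.
Record rec = GetRecord(recordId);          // hypothetical data-layer call
if (rec == null || rec.OwnerId != CurrentUserId())
{
    // forged or mistaken request; fail closed
    throw new UnauthorizedAccessException("record does not belong to caller");
}
// safe to apply the requested modification here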

January 11, 2007

Internet Explorer whitespace-as-comment hack to bypass input filters

Filed under: penetration testing, software security, web apps — chrisweber @ 12:02 am

When testing for XSS (cross-site scripting) issues, you often need to bypass filters using different sorts of encodings and other trickery. To be a good tester you also need to know how the browsers you're concerned with behave differently. Internet Explorer 6.0 has a behavior that's allowed seemingly impassable input validation filters to be bypassed. Note that the issue is not the browser's fault; it's the fault of an improperly designed input validation mechanism on the server. Okay, to illustrate the point:

You're testing a web app that has an input field. Some tags are allowed, but <img src="something"> is not. By replacing the whitespace with a comment, your code is accepted. When returned to the browser (IE 6.x), the comment is interpreted as whitespace and the code executes fine. Test it out:

//Start HTML
<html>
<body>
<img/*comment*/src="javascript:alert('img tag')">
</body>
</html>
//End HTML

This trick can be useful for more than just bypassing filters…

January 10, 2007

IIS 6.0 %uNNNN unicode notation in the URL

Filed under: penetration testing, software security, web apps — chrisweber @ 10:18 pm

I do a lot of web app pen testing, and character encoding is always an important part of many input validation test cases. Some people don't realize that IIS accepts straight Unicode notation in the URL by default. So you can pass in Unicode characters just by typing the proper notation in ASCII on the URL. For example, the following URLs encode an "s", a double quote, and the Cyrillic small letter "о" (which looks a lot like an "o"); a small decoding sketch follows them:

http://somesite.iis/query=unicode-character-%u0073
http://somesite.iis/query=unicode-character-%u0022
http://somesite.iis/query=unicode-character-%u043E
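Conceptually the decoding is simple: the four hex digits after %u become one UTF-16 code unit. A minimal C# sketch of the idea (my own illustration of the notation, not Http.sys's actual code, and it assumes well-formed hex digits):

using System;
using System.Text;

class PercentU
{
    static string DecodePercentU(string s)
    {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.Length; i++)
        {
            // Look for %uNNNN (or %UNNNN) with four hex digits following
            if (s[i] == '%' && i + 5 < s.Length && (s[i + 1] == 'u' || s[i + 1] == 'U'))
            {
                sb.Append((char)Convert.ToInt32(s.Substring(i + 2, 4), 16));
                i += 5; // skip the escape sequence
            }
            else
            {
                sb.Append(s[i]);
            }
        }
        return sb.ToString();
    }

    // DecodePercentU("%u0073") == "s", DecodePercentU("%u043E") == "о"
}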

This is controlled by the following registry key and is enabled by default:

HKLM\System\CurrentControlSet\Services\HTTP\Parameters\PercentUAllowed

A Boolean value. If non-zero, Http.sys accepts the %uNNNN notation in request URLs.

December 26, 2006

CSIDL – Shell constants, enumerations, and flags

Filed under: penetration testing, reverse engineering, software security — chrisweber @ 2:08 pm

I worked on an application which had a couple of requirements:

  1. Allow users access to their local drive content within a defined scope (e.g. either the entire drive, or the My Documents folder only)
  2. Prevent users from accessing files outside of the defined scope. So they shouldn’t be able to access network drives, USB keys, etc.

To achieve this, the shell constants were used, as defined in the Windows SDK:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/shellcc/platform/shell/reference/enums/csidl.asp

This worked well, and after we looked at the code we ran a battery of tests to confirm. For example, we tried the following types of canonicalization:

  • \\host\share\file
  • \\?\folder\file
  • \\10.10.10.10\share\file
  • \\.\folder\file

We kept going, and tried breaking out of the local scope as well:

  • ..\..\..\..\boot.ini
  • ../../../../boot.ini
  • ..%2fboot.ini

And all that sort of stuff. Using the CSIDL constants proved successful, and we could see this through debugging: everything we entered was merely relative to the constant value, and there was no way to change it. A sketch of the same idea in managed code follows.
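For illustration, here's a minimal C# sketch of the same scoping idea (my own sketch, not the app's actual code): resolve the allowed root from the shell folder constant and reject anything that canonicalizes outside it.

using System;
using System.IO;

class ScopeCheck
{
    static bool IsWithinScope(string untrustedRelativePath)
    {
        // Environment.SpecialFolder.MyDocuments corresponds to CSIDL_PERSONAL
        string root = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments)
                      + Path.DirectorySeparatorChar;

        // GetFullPath canonicalizes ..\ sequences, forward slashes, and so on
        string full = Path.GetFullPath(Path.Combine(root, untrustedRelativePath));

        // Anything that escaped the root fails this prefix check
        return full.StartsWith(root, StringComparison.OrdinalIgnoreCase);
    }
}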
