Software and web application security

March 5, 2007

ASP.NET AJAX 1.0 Source Code Released

Filed under: general — chrisweber @ 12:29 pm

Read about it on Scott Gu’s ASP.NET blog

http://weblogs.asp.net/scottgu/archive/2007/01/30/asp-net-ajax-1-0-source-code-released.aspx

February 19, 2007

Web Services denial of service attacks – XmlTextReader

Filed under: general — chrisweber @ 4:59 pm

Most Web Services I look at are built using the .NET Framework and ASP.NET. Today we’re seeing more with ASP.NET’s AJAX extensions but that’s a different story. Many developers choose to implement SOAP and XML as part of their WS solution, and in doing so can inadvertently open the application server up to DoS issues.

First there’s XML. When developers choose to implement XmlTextReader or XmlReader from the .NET Framework, they need to understand the behaviors of these classes. MSDN documents this quite well. I will usually do a quick code review to find implementations of these objects, because the issues can be identified a little faster through code than through testing.

XmlTextReader defaults to allowing external DTDs to be specified. This leads to a whole enchilada of issues and gives attackers a nice bit of control over the host server. Be sure to set the ProhibitDtd property to true. Furthermore, there’s no strict schema validation unless the developer implements one. SOAP is fine, but developers need to implement a custom SOAP extension to enforce strict schema validation. Otherwise it gets pretty easy for an attacker to abuse the WS by embedding things like:

  • large payloads
  • large number of elements
  • nested elements
  • malformed data

To name a few… Without strict validation, I’ve seen web services easily abused. For example, by sending a few large requests it becomes trivial to consume memory on the host server, which eventually leads to resource starvation. To learn more about implementing a custom SOAP extension to tackle this problem, read the MSDN article:

http://msdn.microsoft.com/msdnmag/issues/03/07/XMLSchemaValidation/
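
Coming back to the ProhibitDtd point above, here’s a minimal sketch (assuming .NET 2.0 and a hypothetical request.xml input file) of turning off DTD processing before parsing untrusted XML:

using System.Xml;

class SafeXmlExample
{
    static void Main()
    {
        // XmlTextReader allows DTDs by default; turn that off before touching untrusted input.
        using (XmlTextReader reader = new XmlTextReader("request.xml"))   // hypothetical input file
        {
            reader.ProhibitDtd = true;   // reject DOCTYPE declarations, including external DTDs
            while (reader.Read()) { /* process nodes */ }
        }

        // With the factory pattern, XmlReaderSettings.ProhibitDtd already defaults to true,
        // but setting it explicitly documents the intent.
        XmlReaderSettings settings = new XmlReaderSettings();
        settings.ProhibitDtd = true;
        using (XmlReader safeReader = XmlReader.Create("request.xml", settings))
        {
            while (safeReader.Read()) { /* process nodes */ }
        }
    }
}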

February 10, 2007

Thinkpad as Wii?

Filed under: general — chrisweber @ 11:19 pm

Sweet, the accelerometer is being used for much more than just the Active Protection System.

http://www.lenovoblogs.com/insidethebox/?p=55

February 6, 2007

Checking ntoskrnl for rootkit

Filed under: general — chrisweber @ 12:27 pm

This is not new, but I needed it the other day and wanted to post it here for my own reference. In Microsoft’s kernel debugger tool ‘kd’ the following command checks for binary corruption in every loaded module. Note also that you can do this with Sysinternals’ ‘LiveKd’ for easier on-the-fly debugging.

kd> !for_each_module !chkimg @#ModuleName

Make sure you have trusted modules to compare against first, and point kd at them before running the above command with:

kd> .exepath c:\Windows\system32

January 25, 2007

How to view recovery passwords for Windows Vista Bitlocker

Filed under: general — chrisweber @ 10:21 am

Came across this and just wanted to mark it in case I ever need it: how to use the BitLocker Recovery Password Viewer for Active Directory Users and Computers tool to view recovery passwords for Windows Vista.

http://support.microsoft.com/?kbid=928202

January 18, 2007

Preventing cross-site request forgery (XSRF, CSRF, aka one-click attack)

Filed under: penetration testing, software security, web apps — chrisweber @ 9:39 pm

The XSRF attack exploits the stateless nature of HTTP and your web application. In essence, an attacker can trick you into taking an action against a site. To do so you would just need to visit the attacker’s site or fall victim to some phishing attack, etc. Here’s an example. Consider a web application that sells things; call it Amazonians. You have a user profile page there, where you can change your account’s email address. This is the email address where your password is sent when you click “forgot password.” When you use this page to set your email address, it’s a simple HTTP POST, and the only value sent up is the new email address wrapped in SOAP, JSON, whatever. In order for this request to succeed, however, you need a valid cookie.

Here’s the attack. You visit the evil site. The evil site prepares the above request to change your email address to something the evil site now controls. By visiting the evil site, your browser is automatically made (by virtue of a link or an XmlHttpRequest) to send that request to Amazonians. Since your browser has valid cookies for the website, the request succeeds, and now the evil site owns your account.

Okay so how do we prevent this? Some options are:

a) Amazonians requires that you type in your password when making the request to change your email address. Unless the evil site can trick you into entering your Amazonians password, the request will fail.

b) Amazonians requires that you enter a captcha. Same as above.

c) Amazonians generates a one-time, unique random value that is sent to you when the page is requested. This value is tied to your session and must be sent up in the postback. The problem with this technique is that some cross-domain browser bugs may be used by the attacker to get this value. Consider the Internet Explorer mhtml:// bug.

The key to preventing the XSRF attack in the face of such browser bugs is to really analyze the web app and understand which calls are most critical and require a solid mitigation. Obvious places include:

  • anywhere password or account information can be changed
  • anywhere data or records can be added, modified or deleted
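
For option (c), here’s a minimal sketch of issuing and checking a per-session token in an ASP.NET page. The page class and control names are hypothetical, and the markup is assumed to declare an <asp:HiddenField ID="CsrfTokenField" runat="server" /> that backs the CsrfTokenField member:

using System;
using System.Security.Cryptography;
using System.Web;
using System.Web.UI;

public partial class ChangeEmail : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // Generate a random token, tie it to the session, and emit it in the hidden field.
            byte[] buf = new byte[32];
            new RNGCryptoServiceProvider().GetBytes(buf);
            string token = Convert.ToBase64String(buf);
            Session["CsrfToken"] = token;
            CsrfTokenField.Value = token;   // hidden field declared in the (assumed) page markup
        }
        else
        {
            // Reject the postback unless the submitted token matches the session copy.
            string expected = Session["CsrfToken"] as string;
            if (expected == null || CsrfTokenField.Value != expected)
            {
                throw new HttpException(403, "Invalid request token.");
            }
        }
    }
}

Note that this doesn’t defeat an attacker who can read the token through a cross-domain bug like the mhtml:// issue above, which is why the most critical operations deserve option (a) or (b) as well.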

January 15, 2007

Uninformed.org second paper on subverting PatchGuard

Filed under: reverse engineering, software security — chrisweber @ 9:13 am

Uninformed is pleased to announce the release of its sixth volume.  This volume includes 3 articles on reverse engineering and exploitation technology.  These articles include:

– Engineering in Reverse: Subverting PatchGuard Version 2
Author: Skywing

– Engineering in Reverse: Locreate: An Anagram for Relocate
Author: skape

– Exploitation Technology: Exploiting 802.11 Wireless Driver Vulnerabilities on Windows
Authors: Johnny Cache, H D Moore, skape

This volume of the journal can be found at:

http://www.uninformed.org/?v=6

January 14, 2007

How to: Fuzzing Web Services on IIS 6.0 and ASP.NET

Filed under: penetration testing, software security, web apps — chrisweber @ 12:57 pm

So we want to fuzz something SOAPy, again. Well, here’s how we’re gonna do it. The approach I like to take with clients is gray-box, or code-assisted, penetration testing. Gray-box analysis is a powerful technique combining input testing with source analysis, runtime tracing, profiling, and debugging to identify real issues in the software. This example picks up from the last post, to fuzz or not to fuzz web services: we’ve got web services in managed code plus some unmanaged code modules handling user input.

SOAP fuzzing should begin by taking the client requests for each service and isolating the element values to be manipulated. In the first stage of fuzzing we will change the entire value, without conforming to the value format. This should turn up gross errors in the consumption of the web service data or denial of service conditions from unexpected data formats. In the second fuzzing sweep we’ll present the value in the correct format, with just a portion of that value replaced with a malformed value. This phase should find issues that would pass a validation gateway, but still cause problems when the data is consumed.

In all fuzzing cases we will start from the perspective of a most-correct request, where only a single value is fuzzed, before fuzzing multiple values concurrently. Additional phases will be specific tests based on a deep understanding of the logic being tested, such as fuzzing a value that states UserID=5 with a range of integers.

For example, during phase one everything between the Value tags should be fuzzed as a single blob. The string “org:division/category/DATA=DATA” will be replaced as a whole with the fuzz strings.

Original

<Value>org:division/category/DATA=DATA</Value>

Fuzzed

<Value>AAAAAAAAAAAAAAAAAAAAAA</Value>

In phase two, the value will be separated into its subcomponents. For each subcomponent a fuzzed value will be inserted until all portions of the value are individually fuzzed.

Original

<Value>org:division/category/DATA=DATA</Value>

Fuzzed #1

<Value>AAAAAAAAA:division/category/DATA=DATA</Value>

Fuzzed #2

<Value>org:AAAAAAAAAAAA/category/DATA=DATA</Value>

Fuzzed #3

<Value>org:division/AAAAAAAAAAAAA/DATA=DATA</Value>

Fuzzed #4

<Value>org:division/category/AAAAAAAAAA=DATA</Value>

Fuzzed #5

<Value>org:division/category/DATA=AAAAAAAAAA</Value>
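
The five variants above can be cranked out mechanically. Here’s a minimal sketch (a hypothetical helper, not something from the engagement) that splits the value on its separators and swaps one subcomponent at a time for the payload:

using System;
using System.Collections.Generic;

class SubcomponentFuzzer
{
    static readonly char[] Separators = { ':', '/', '=' };

    // Yield one variant per subcomponent, each with that subcomponent replaced by the payload.
    static IEnumerable<string> Variants(string original, string payload)
    {
        // Split into subcomponents while keeping the separators in place.
        var parts = new List<string>();
        int start = 0;
        for (int i = 0; i < original.Length; i++)
        {
            if (Array.IndexOf(Separators, original[i]) >= 0)
            {
                parts.Add(original.Substring(start, i - start));  // subcomponent
                parts.Add(original[i].ToString());                // separator
                start = i + 1;
            }
        }
        parts.Add(original.Substring(start));

        // Subcomponents sit at the even indexes; swap each one in turn.
        for (int i = 0; i < parts.Count; i += 2)
        {
            string saved = parts[i];
            parts[i] = payload;
            yield return string.Join("", parts.ToArray());
            parts[i] = saved;
        }
    }

    static void Main()
    {
        foreach (string v in Variants("org:division/category/DATA=DATA", new string('A', 12)))
        {
            Console.WriteLine("<Value>" + v + "</Value>");
        }
    }
}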

And to keep it going, fuzzing should actually be expanded quite a bit beyond the example above. In addition to fuzzing strings with other strings, integers, byte arrays, and so on should be used. The separator values (e.g. :, /, and =) should also be included in testing. Typical payloads used in fuzzing are shown below, and a quick sketch of a few of them follows the list. When testing we usually apply the relevant selection to the logic being tested.

  • Character multiples
  • Max unsigned and signed integer values
  • Variations on format strings using ‘%n’
  • Long strings
  • Empty strings and null values
  • Extended ASCII
  • Binary values
  • Base64 and HTML encoded values
  • SQL Injection
  • Common bad ASCII (‘ ” < >)
  • All numbers
  • All letters
  • All spaces
  • Invalid date formats
  • Dictionaries relevant to the application
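
A quick sketch (illustrative only, not an exhaustive set) of a few of these payload classes:

using System;
using System.Collections.Generic;

class Payloads
{
    // Build a handful of the payload classes listed above; real runs use much larger sets
    // plus application-specific dictionaries.
    static List<string> Build()
    {
        return new List<string>
        {
            new string('A', 0x10000),            // long string / character multiple
            int.MaxValue.ToString(),             // max signed integer
            uint.MaxValue.ToString(),            // max unsigned integer
            "%n%n%n%n%n%n",                      // format string variation
            "",                                  // empty string
            "' OR '1'='1",                       // simple SQL injection probe
            "<script>alert(1)</script>",         // common bad ASCII / script injection
            "0000000000",                        // all numbers
            "aaaaaaaaaa",                        // all letters
            "          ",                        // all spaces
            "13/32/2007",                        // invalid date format
        };
    }

    static void Main()
    {
        foreach (string p in Build())
        {
            Console.WriteLine(p.Length > 40 ? p.Substring(0, 40) + "..." : p);
        }
    }
}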

To monitor the behavior of the web service during the fuzzing runs, we attach WinDbg to the worker process with heap checking enabled. We break on any significant exceptions to investigate the call stack and relevant code sections. The event log is scrubbed for any reported errors, which are then investigated. To detect denial of service conditions we use Perfmon to observe the process’s CPU and memory usage. In our SOAP fuzzing we should also insert a unique marker into each request, and log each sent request, so that we can later reproduce the condition that caused an error. In this case we will place an incremented number in the User-Agent value of the SOAP requests, which is readable in the IIS logs. In addition, the randomness of fuzzing can be seeded with a value to allow for reproducibility. This is possible with some of the fuzzing frameworks out there, which we haven’t talked about much, such as Peach.
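
Here’s a bare-bones sketch of that marker-and-logging idea. The endpoint, operation, and envelope below are made up; the point is the incrementing User-Agent value and the local log of every request sent:

using System;
using System.IO;
using System.Net;
using System.Text;

class SoapFuzzClient
{
    static void Main()
    {
        string endpoint = "http://localhost/Service.asmx";                 // assumed test endpoint
        string[] fuzzCases = { new string('A', 1024), "", "%n%n%n%n" };    // tiny sample set

        for (int i = 0; i < fuzzCases.Length; i++)
        {
            string envelope =
                "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                "<soap:Body><Lookup xmlns=\"urn:example\"><Value>" + fuzzCases[i] +
                "</Value></Lookup></soap:Body></soap:Envelope>";

            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(endpoint);
            request.Method = "POST";
            request.ContentType = "text/xml; charset=utf-8";
            request.Headers["SOAPAction"] = "\"urn:example/Lookup\"";      // assumed SOAP action
            request.UserAgent = "fuzzer-case-" + i;                        // marker readable in the IIS cs(User-Agent) field

            // Keep a local copy of every request so failures can be reproduced later.
            File.AppendAllText("sent-requests.log", i + "\t" + envelope + Environment.NewLine);

            byte[] body = Encoding.UTF8.GetBytes(envelope);
            using (Stream s = request.GetRequestStream())
            {
                s.Write(body, 0, body.Length);
            }

            try
            {
                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine("case {0}: HTTP {1}", i, (int)response.StatusCode);
                }
            }
            catch (WebException ex)
            {
                Console.WriteLine("case {0}: {1}", i, ex.Message);          // 500s and timeouts get investigated
            }
        }
    }
}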

January 13, 2007

To fuzz or not to fuzz web services…

Filed under: penetration testing, software security, web apps — chrisweber @ 12:42 pm

Is it worth the time to run input fuzzing tests against web services? When engaging a client for a security review I’m often the one to pose this question. Sure, why not… right? Well honestly there’s a more precise way to answer this question. First we really need to understand the goals of the security review, so a few questions are in order.

  1. Has threat modeling been done or is this my job?
  2. How much time and budget do we have for a security review?
  3. How complex are the web services? e.g. how many parameters do they take and in what format
  4. Are the web services written in managed code?
  5. Is user-input passed to unmanaged code?

Let’s take these answers from a common scenario:

  1. Yes threat modeling is complete
  2. We have about 2 or 3 weeks that you can use to test
  3. Very complex, they use WS-Security, take hundreds of parameters, some encrypted, using custom formats, SOAP, as well as embedded XML blobs
  4. Yes, they’re written in C# using the .NET Framework
  5. Some specific elements of user-input are handled by unmanaged code modules

Some things not obvious in these questions are:

  • that the client is highly interested in finding Denial of Service (DoS) issues
  • that millions of people will be using these Web Services whether they know it or not
  • that no input fuzzing has been done to date

With 2-3 weeks we could get a lot done in a security review focused just on the web services. It’s becoming clear that fuzzing input would be a worthwhile venture. We’ll likely turn up some DoS issues, possibly some unmanaged code issues as well. Since we have a decent timeframe, we’ll be checking for the following issues, not all of which fuzzing is good for:

  • elevation of privilege (EoP)
  • repurposing attacks
  • cross-site scripting (yes, even web services in some cases)
  • information disclosure
  • session replay
  • SQL Injection
  • DTD attacks
  • XML validation
  • script injection
  • repudiation
  • denial of service
  • buffer overrun

Fuzzing will help with some of these, so at this point the answer is yes, let’s do it. We’ll also be doing some code review, which is great for quickly identifying issues such as DoS, XML validation, and DTD attacks. And we’ll be studying the specs and architecture along the way to keep a clear understanding of the system and help identify repurposing attacks, which will then be tested for confirmation.

Ok let’s go!

January 12, 2007

Elevation of Privilege lowest common denominator

Filed under: penetration testing, software security, web apps — chrisweber @ 11:57 pm

Sometimes a web app EoP vulnerability is as difficult to exploit as stealing a cookie or guessing a password, and other times it’s as easy as incrementing an integer. Today I was testing another web app and modifying records belonging to other users by incrementing the recordId value… I couldn’t believe it was 2007. Luckily the fix I discussed with the devs was simple, but the application architecture had more severe systemic issues which allowed this in the first place.
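
The kind of simple fix I mean boils down to checking ownership on the server side against the authenticated user rather than trusting the client-supplied recordId. A minimal sketch with hypothetical types (not the client’s actual code):

using System.Collections.Generic;
using System.Security;

class Record
{
    public int Id;
    public string OwnerUserName;
    public string Data;
}

class RecordService
{
    private readonly Dictionary<int, Record> _records = new Dictionary<int, Record>();

    public void UpdateRecord(int recordId, string newData, string authenticatedUserName)
    {
        Record record;
        // Authorize against the server-side owner; the recordId alone proves nothing.
        if (!_records.TryGetValue(recordId, out record) ||
            record.OwnerUserName != authenticatedUserName)
        {
            throw new SecurityException("Record does not belong to the current user.");
        }

        record.Data = newData;
    }
}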
