Debugging a .NET service that is crashing on start up

Recently I was dealing with a .NET Windows service application that was failing when attempting to start the service. The exception showing up in the event log was a “.NET Runtime 2.0 Error Reporting” event of type clr20r3 for an unhandled exception - basically the framework logging that something is wrong (very wrong).

The actual exception logged was:

EventType clr20r3, P1 weauxpmmrp42zxajarbmdm5knch0ez4o, P2 0.4.112.0, P3 4ae90a3c, P4 nhibernate, P5 1.2.1.4000, P6 47c58d60, P7 ffd, P8 50, P9 nhibernate.mappingexception, P10 NIL.

This was annoying (to say the least), as the service had been running through all previous development and test installations, and was now failing in an environment where, due to network restrictions, it was not possible to debug with Visual Studio. However annoying the problem was, it was there and had to be solved.

With Windows services there are ways to hook up a debugger when running, or on start-up, to help identify issues; this KB article explains how to do this: http://support.microsoft.com/kb/824344.

In this case however we had a crashing service, so it seemed to make sense to capture the crash for investigation. Using the Debugging Tools for Windows (http://www.microsoft.com/whdc/DevTools/Debugging/default.mspx), specifically the adplus.vbs script (http://msdn.microsoft.com/en-us/library/cc265629.aspx), it is possible to create a memory dump of the application state at the point of the crash. It was a bit of a race situation (and I am sure there are better ways), but it was simple enough to start the service and then run the adplus command line to generate the crash dump, using the command:


cscript adplus.vbs -crash -quiet -FullOnFirst -pn Service.exe

With -crash being self-explanatory, -quiet to make start-up quick (it was a race after all), -FullOnFirst to capture full dumps for all exceptions, and -pn to identify the process name to hook. Once the adplus script starts it attaches to the process and produces a dump when the application fails.

There are loads of articles about how to read and deal with dump files; a basic start is here: http://support.microsoft.com/kb/315263, and whilst biased toward ASP.NET, the debugging series by Tess Ferrandez at http://blogs.msdn.com/tess/pages/net-debugging-demos-information-and-setup-instructions.aspx is a great place to get introduced to the tools and techniques.

In this case the issue was identified from the information returned by the simple command “!analyze -v” in WinDbg (more info on available commands can be found at http://windbg.info/doc/1-common-cmds.html). The issue was a pretty simple (and stupid) one: our service was using NHibernate over an Oracle database via the Oracle Data Access Client. The ODAC is only delivered (at version 11.1.0.7.20) with a 32-bit implementation, but the release configuration build of the service was not restricted to 32 bits - and so it was throwing a BadImageFormatException on the ODAC dll. A quick change to the assembly using corflags /32BIT+, and a fix to the release configuration, solved this.
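
For reference, forcing the 32-bit flag onto the already-built assembly is a one-liner with the corflags tool (the assembly name here is just a placeholder), although the lasting fix is still to set the platform target to x86 in the release configuration:

corflags Service.exe /32BIT+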

Using fiddler to manually mock service response

Fiddler (http://www.fiddler2.com/fiddler2/) is one of those tools that make virtually impossible tasks possible, sometimes even easy.

This post will describe how to use fiddler to mock responses from a service, but first I will describe the problem we faced, and why fiddler made the investigation so easy.

One of the integration points in a large SOA solution we are working on was an Oracle Application Server MTOM service being consumed by a WCF .NET 3.5 client to upload and download large data packets. All was working fine (perhaps surprisingly) until the client platform moved to .NET 3.5 SP1, where the download operation failed with an invalid MIME header exception. We did of course try to identify the issue by reading through the specifications, but after circling through the MTOM, SOAP 1.2 and MIME specs a few (dozen) times with no real output we decided to try another approach. This is where Fiddler came in: we were able to capture the response from the service and investigate the causes of the MIME header exception by pushing through amended responses - not quite as unscientific as trial and error, but not far off!

Firstly, fire up Fiddler, execute the service call and capture a response, saving it to a file. An easy way to filter the requests of interest from all the other extraneous HTTP traffic is to use the Filters tab to show traffic only from an identified host:

[image: Fiddler Filters tab limiting traffic to the identified host]

Edit the saved response as required, then on the Fiddler Rules menu select Automatic Breakpoints and choose Before Requests (shortcut key F11). This will stop the request before it hits the server, allowing you to supply your edited response instead. Once the client makes the request you will notice the request held at a breakpoint in Fiddler:

[image: the request held at a breakpoint in Fiddler]

Note that in the breakpoint Fiddler will let you choose the response to send to the client; drop down the Choose Response combo in the breakpoint and select your amended response file. Finally, press Run to Completion, which will push the amended response on to the client, where you can continue or debug as required.

This sort of service mocking can (of course) be automated with Fiddler, which can be really useful during development, but this manual approach allowed easy editing of the response headers and let us identify the issue relatively quickly. For info, the interop issue is with both Oracle and Microsoft. I can’t see the fix being as quick as the investigation…

Securing ELMAH with ASP.NET MVC

Disclaimer: This is one of those posts that unashamedly links off to loads of other clever people.

ELMAH gets loads of love, and recently has been the subject of many posts about implementation with ASP.NET MVC. With the recent 1.0 release I thought I would build it into a site due to go into production in the near future, so obviously I needed to protect it from evil-doers. The first link is a great one for describing the basic implementation of ELMAH on ASP.NET MVC.

So that's it up and running - if it took you more than 10 minutes I would be surprised!

Next, restrict access to the log target. Phil Haack posted a while back about securing ELMAH with ASP.NET, and it takes very little additional effort to apply the same approach to a secured ASP.NET MVC site (obviously).

The approach is to move the handler to a different path by changing the handler config - in this case adding the path segment logs (and changing elmah to exceptions):

<handlers>
    <add name="Elmah" verb="POST,GET,HEAD" path="logs/exceptions.axd" 
	preCondition="integratedMode" type="Elmah.ErrorLogPageFactory, Elmah"/>
</handlers>

Note that this is for IIS7; for IIS6 change the httpHandlers section instead.
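
For IIS6 the equivalent registration (under system.web) would look something like this - a sketch of the same path change rather than a copy of our exact config:

<httpHandlers>
    <add verb="POST,GET,HEAD" path="logs/exceptions.axd" 
	type="Elmah.ErrorLogPageFactory, Elmah"/>
</httpHandlers>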

You can then add a location section to the system.web element of the root web.config to deny access to unauthenticated users (or roles whatever...) such as this:

<location path="logs">
	<system.web>
		<authorization>
			<deny users="?" />
		</authorization>
	</system.web>
</location>

Make sure that the routing ignore rule that you set up originally is also changed to reflect the new path - something like:

public static void RegisterRoutes(RouteCollection routes)
{
    //ELMAH exception handling 
    
    routes.IgnoreRoute("logs/{resource}.axd/{*pathInfo}");
...

and that's it. Requests to logs/exceptions.axd will be protected.

Service factory merge issues...

I have already mentioned that I have been heavily using the Service Factory modelling edition in recent projects; very recently I got a massive conflict with an SVN update to one particular service!

The service contract itself took an awful lot of fixing, with a lot of messing around in DiffMerge to make sure the changes from both sides were correctly merged - it's not easy to merge, and reading the operations and relating them back is honkingly fiddly, but not impossible. Then it came to the supporting diagram XML - this was a real mess; it really only took me one look before I had to think of something else. After thinking through the hassle of rolling back my changes and then re-applying them to the updated contract, I realised I wanted to avoid all that and got into old school "just try it and see" mode - I canned the inner XML from the diagram file. Sweetly, if you leave the root element and delete all the content, once you open the service contract diagram it re-adds all the required elements. So it's just a case of laying everything out again - significantly less hassle than the other option, and I was chuffed that I saved time.

By the way, this also worked with the data contract, which was similarly massively conflicted.

ASP.NET MVC, Moq, and unit testing auth/auth

This is a short post about unit testing authentication and authorisation using the framework Authorize attribute. For the most recent project we have built our own implementation behind the ASP.NET MVC template's IMembershipService to meet very specific customer requirements, but this testing approach remains the same if you use the built-in one.
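
For context, the actions under test are just ones decorated with the framework attribute - something like the sketch below, where the controller, action and role names are purely illustrative rather than taken from the real project:

public class ExampleAdminController : Controller
{
    // Unauthenticated callers never reach the action body; the framework
    // AuthorizeAttribute short-circuits the request with a 401, and the same
    // happens for authenticated users who are not in the named role.
    [Authorize(Roles = "Administrators")]
    public ActionResult Revoke(string userName)
    {
        return RedirectToAction("Index");
    }
}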

It was clear pretty quickly that duplication was going to happen when testing authentication and authorisation, so I wanted to distill the test code as much as I could. I ended up writing two helper methods to check for redirection of an unauthenticated user, and of a user not in a defined role - the following is the helper class in full:

public class AuthenticationTests
{
    public static void UnAuthorizedUserRedirects(Controller controller, string actionName)
    {
        var user = new Mock<IPrincipal>();
        user.ExpectGet(u => u.Identity.IsAuthenticated).Returns(false);
        var context = new Mock<ControllerContext>();
        context.ExpectGet(c => c.Controller).Returns(controller);
        context.ExpectGet(c => c.HttpContext.User).Returns(user.Object);
        context.ExpectSet(c => c.HttpContext.Response.StatusCode).Callback(status => Assert.AreEqual(401, status)).Verifiable();
        controller.ActionInvoker.InvokeAction(context.Object, actionName);
        context.VerifyAll();
        user.VerifyAll();
    }
    public static void UserNotInRoleRedirects(Controller controller, string actionName, string roleName)
    {
        var user = new Mock<IPrincipal>();
        user.ExpectGet(u => u.Identity.IsAuthenticated).Returns(true);
        user.Expect(u => u.IsInRole(roleName)).Returns(false);
        var context = new Mock<ControllerContext>();
        context.ExpectGet(c => c.Controller).Returns(controller);
        context.ExpectGet(c => c.HttpContext.User).Returns(user.Object);
        context.ExpectSet(c => c.HttpContext.Response.StatusCode).Callback(status => Assert.AreEqual(401, status)).Verifiable();
        controller.ActionInvoker.InvokeAction(context.Object, actionName);
        context.VerifyAll();
        user.VerifyAll();
    }
}


Using Moq, they set expectations on a mocked IPrincipal and ControllerContext to ensure that the action sets an HTTP status code of 401 (unauthorised); I won't go into too much detail - you can read the code, and it's actually surprisingly straightforward...

Usage is then dead easy and keeps the tests really clean - just spin up your controller and ask for one of the actions to be checked - for example:

[Test]
public void UnAuthenticatedUserRequestForRevokeRedirects()
{
    Mock<IMembershipService> membershipService = new Mock<IMembershipService>();
    AccountAdminController controller = new AccountAdminController(membershipService.Object);
    AuthenticationTests.UnAuthorizedUserRedirects(controller, "Revoke");
}
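
The role-based check is used in just the same way - a sketch below, with the role name being purely illustrative:

[Test]
public void UserNotInAdminRoleRequestForRevokeRedirects()
{
    Mock<IMembershipService> membershipService = new Mock<IMembershipService>();
    AccountAdminController controller = new AccountAdminController(membershipService.Object);
    // "Administrators" is a placeholder - use whatever role the action actually demands
    AuthenticationTests.UserNotInRoleRedirects(controller, "Revoke", "Administrators");
}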

ASP.NET MVC RC1 Authorize

I found a wee issue with the ASP.NET MVC Web Application template and the Authorize attribute. Initially I was in two minds whether to convert the existing app or create a new project from the template and port the code over - I took the second option as this usually causes the least amount of angst in the long run...

So the issue was found when trying to use the Authorize attribute to redirect to the template login page. By default the template's web.config has the following authentication section:

<authentication mode="Forms">
	<forms loginUrl="~/Account/LogIn" />
</authentication>

but the AccountController action is called LogOn!  Note the subtle difference between LogIn and LogOn...
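
The fix is simply to point loginUrl at the action that actually exists:

<authentication mode="Forms">
	<forms loginUrl="~/Account/LogOn" />
</authentication>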

ASP.NET MVC and Moq

Using Beta 1 of ASP.NET MVC, I was trying to use Moq to test a controller action that called UpdateModel with a FormCollection as the value provider. Reading Scott Guthrie's post for Beta 1, he makes it clear that it should be possible not to mock the context for the controller, to decrease unit test friction - however any use of UpdateModel generates an ArgumentNullException...

So I still had to mock - hopefully this will be sorted out in the next beta, but for now, using Moq, this is the minimum required to mock the ControllerContext:

var routeData = new RouteData();
var httpContext = new Mock<HttpContextBase>();
var controllerContext = new Mock<ControllerContext>(httpContext.Object, routeData, controller);
controller.ControllerContext = controllerContext.Object;
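
In a test the snippet then just slots in before invoking the action - a sketch below, where the controller, action and form field are made-up names for illustration:

[Test]
public void EditBindsPostedFormValues()
{
    // hypothetical controller whose Edit action calls UpdateModel with the posted form
    var controller = new ProductController();
    var routeData = new RouteData();
    var httpContext = new Mock<HttpContextBase>();
    controller.ControllerContext = new Mock<ControllerContext>(httpContext.Object, routeData, controller).Object;

    var form = new FormCollection { { "Name", "Widget" } };
    var result = controller.Edit(form);

    Assert.IsInstanceOfType(typeof(ViewResult), result);
}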

NUnit binary requirement...

I like to have all dependencies for a project in the source code repository so that the solution can build for new developers straight from a checkout. After having to sift through the NUnit binaries (again) to work out what I needed rather than referencing the GAC, this list is here as a reminder:

nunit-console-runner.dll
nunit-console.exe
nunit-console.exe.config
nunit-gui-runner.dll
nunit.core.dll
nunit.core.extensions.dll
nunit.core.interfaces.dll
nunit.exe
nunit.exe.config
nunit.framework.dll
nunit.framework.extensions.dll
nunit.mocks.dll
nunit.uikit.dll
nunit.util.dll


This includes the UI stuff required for running the GUI - just in case there is someone out there who doesn't have ReSharper...
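
For completeness, referencing the checked-in framework assembly from a project file is then just a relative HintPath - a sketch assuming the binaries sit in a lib\NUnit folder at the solution root:

<Reference Include="nunit.framework">
  <HintPath>..\lib\NUnit\nunit.framework.dll</HintPath>
</Reference>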

Messed up Service Factory projects!

Using the Service Factory modelling edition recently, I managed to get a new service project into a position where it would not generate code, with no indication as to why. I knew it was my fault, as I had deleted, moved and renamed projects a few times - it was one of those situations where I wanted to get the name right, had an indecisive few minutes, changed it, then eventually settled on the first name. The self-inflicted mess I managed to end up with was two projects in the project mapping table, and by sheer (bad) luck I had obviously chosen the one that referenced the deleted projects!

I resolved it by opening ProjectMapping.xml and finding the ProjectMappingTable element that contained the correct project references - basically this meant finding, by project Id, those projects that still exist in the solution file.

Configuring logging for ODP.NET

For the 11g client this is just not documented very well. The details of the settings can be found at http://download.oracle.com/docs/html/E10927_01/featConfig.htm, but you need to set the values under HKLM\ORACLE\ODP.NET\<CLR version>\. To get started, set TraceLevel to 1 and TraceFileName to something sensible, and you get the method entry and exit points logged along with the SQL.
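
As a sketch only (the exact key path - shown here under SOFTWARE - and the value types are assumptions on my part; check the Oracle doc above for your install), something like:

reg add "HKLM\SOFTWARE\ORACLE\ODP.NET\<CLR version>" /v TraceLevel /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\ORACLE\ODP.NET\<CLR version>" /v TraceFileName /t REG_SZ /d "c:\temp\odpnet.trc" /f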

Although this is not a great deal of use for solving or working around some of the issues we have been encountering with the 11.1.0.6.20 provider version (support for user-defined object types and XMLType is particularly problematic), it is useful for debugging and identifying issues.