Tests failing with ArgumentNullException on controllerContext

So to set the scene: I was slow-time migrating an existing ASP.NET MVC 1.0 application to v2.0 (for no other reason than I wanted to catch up with the new stuff, as I haven't been using MVC in recent projects). After using the automated migration tool written by Eilon Lipton of the ASP.NET team, and sorting some minor issues, I found that a few tests were failing with an ArgumentNullException on the ModelValidator controllerContext parameter when attempting to UpdateModel. I knew controllerContext was null because I wasn't setting it; the question was why it was now required.


Model validation is one of the new features in v2.0. Stepping through the MVC source code (peeling through the layers of abstraction) showed that the abstract base class ModelValidator, used by the default model binder, requires the ControllerContext on construction. For some background I found this interesting dissection of the MVC model validation from an architectural perspective.

Clearly there are many potential solutions to get the tests green. In this case I took the simple, pragmatic approach of just setting the ControllerContext on the Controller under test.
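A minimal sketch of that fix in a test, using Moq (the variable names are illustrative, and depending on what else the action touches the mocked HttpContextBase may need further setup):

// Give the controller under test a ControllerContext so the v2.0 model
// validation machinery has a non-null context when UpdateModel runs
var httpContext = new Mock<HttpContextBase>();
controller.ControllerContext = new ControllerContext(httpContext.Object, new RouteData(), controller);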

SOAP requests succeeding and logging an HTTP 400

Had a really annoying issue recently where WCF SOAP requests were returning successfully (HTTP 200) but were apparently also logging HTTP 400 "Bad Verb" errors in the HTTPERR log, as this small extract shows:

2010-03-10 09:25:18 127.0.0.1 55897 127.0.0.1 80 - - - 400 - Verb -
2010-03-10 09:25:22 127.0.0.1 55902 127.0.0.1 80 - - - 400 - Verb -
2010-03-10 09:25:29 127.0.0.1 55905 127.0.0.1 80 - - - 400 - Verb -

This issue was happening in a large SOA solution, where each WCF service (hosted in IIS) offered a simple "Heartbeat" operation for use by a hardware load balancer for health monitoring. It was clear that the monitors were causing the issue (requests from other clients didn't exhibit this unusual behaviour); what was less clear was why.

The first step was to try to see what was going on. Using Network Monitor I captured a trace of the activity; an extract from a failing trace shows the SOAP request and response, and then a follow-up HTTP response with the 400 error.

750 23.640625 {TCP:85, IPv4:18} 15:36:42.551 20.20.20.179 20.20.20.34 TCP TCP:Flags=......S., SrcPort=34678, DstPort=HTTP(80), PayloadLen=0, Seq=179182272, Ack=0, Win=5840 ( Negotiating scale factor 0x0 ) = 5840
751 23.640625 {TCP:85, IPv4:18} 15:36:42.551 20.20.20.34 20.20.20.179 TCP TCP:Flags=...A..S., SrcPort=HTTP(80), DstPort=34678, PayloadLen=0, Seq=776316902, Ack=179182273, Win=16384 ( Negotiated scale factor 0x0 ) = 16384
752 23.640625 {TCP:85, IPv4:18} 15:36:42.551 20.20.20.179 20.20.20.34 TCP TCP:Flags=...A...., SrcPort=34678, DstPort=HTTP(80), PayloadLen=0, Seq=179182273, Ack=776316903, Win=5840 (scale factor 0x0) = 5840
753 23.640625 {HTTP:86, TCP:85, IPv4:18} 15:36:42.551 20.20.20.179 20.20.20.34 SOAP SOAP:xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
754 23.640625 {HTTP:86, TCP:85, IPv4:18} 15:36:42.551 20.20.20.34 20.20.20.179 SOAP SOAP:xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
755 23.640625 {HTTP:86, TCP:85, IPv4:18} 15:36:42.551 20.20.20.34 20.20.20.179 HTTP HTTP:Response, HTTP/1.1, Status Code = 400, URL: /Application/Service.svc
756 23.640625 {TCP:85, IPv4:18} 15:36:42.551 20.20.20.179 20.20.20.34 TCP TCP:Flags=...A...., SrcPort=34678, DstPort=HTTP(80), PayloadLen=0, Seq=179182711, Ack=776317329, Win=6432 (scale factor 0x0) = 6432
757 23.640625 {TCP:85, IPv4:18} 15:36:42.551 20.20.20.179 20.20.20.34 TCP TCP:Flags=...A...F, SrcPort=34678, DstPort=HTTP(80), PayloadLen=0, Seq=179182711, Ack=776317494, Win=7504 (scale factor 0x0) = 7504
758 23.640625 {TCP:85, IPv4:18} 15:36:42.551 20.20.20.34 20.20.20.179 TCP TCP:Flags=...A...., SrcPort=HTTP(80), DstPort=34678, PayloadLen=0, Seq=776317494, Ack=179182712, Win=65097 (scale factor 0x0) = 65097

Knowing that HTTP.sys parses the request before handing it on for processing (in this case by ASP.NET), I thought I might get some joy from the built-in ETW tracing – a quick hit on Google turned up some decent posts from the Http.sys team about capturing and analysing these traces. This didn't really add a lot, but it confirmed that HTTP.sys was rejecting a request.

The load balancer's monitor was a simple send and receive over TCP: posting a send string and parsing the response to check for a valid state. In order to emulate the monitor I needed to get right back to basics, avoiding all the (well appreciated) layers of abstraction, and write directly against a Socket. A really simple bit of code: it took the send string from the load balancer:

POST /Application/Service.svc HTTP/1.1
Accept-Encoding: gzip,deflate
Content-Type: text/xml;charset=UTF-8
SOAPAction: "http://www.company.com/product/services/service/0/1/ServiceContract/Heartbeat"
Host:
Content-Length: 136

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"><soapenv:Header/><soapenv:Body/></soapenv:Envelope>

and just sent it directly to the socket (here request is the send string above, held in a string variable):

// Encode the send string ready for transmission
byte[] requestBytes = Encoding.ASCII.GetBytes(request);

Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
socket.Connect("hosting_server", 80);
socket.Send(requestBytes);

// Read the response into a buffer and decode it for inspection
byte[] responseBytes = new byte[socket.ReceiveBufferSize];
int i = socket.Receive(responseBytes);
string response = Encoding.UTF8.GetString(responseBytes, 0, i);

socket.Close();

Note that the host is actually empty in the send string; this is allowed and documented in the RFC for HTTP 1.1, section 14.23, although to be honest that was the first thing I tried. So after using Fiddler to capture a valid response from a .NET client that did not exhibit the issue, comparing, and then scientifically fiddling with a few values to no avail, I actually had to do some reading. The answer was actually in the spec (who would have thought!) - the header field definitions in the RFC for HTTP 1.1, section 14.10, describe the Connection header, and the pertinent phrase from that section was:

HTTP/1.1 applications that do not support persistent connections MUST include the "close" connection option in every message.

So the fix was actually ludicrously easy: adding "Connection: close" to the header in the load balancer send string. After so much investigation effort I honestly hoped for something a little more dramatic…
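For reference, the amended send string header (only the Connection line is new):

POST /Application/Service.svc HTTP/1.1
Accept-Encoding: gzip,deflate
Content-Type: text/xml;charset=UTF-8
SOAPAction: "http://www.company.com/product/services/service/0/1/ServiceContract/Heartbeat"
Host:
Connection: close
Content-Length: 136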

Using SOAP UI to mock UDDI

In a recent SOA implementation we were relying on UDDI as a service registry for dynamic discovery of service endpoints from the client tier. The client tier interacted with many services, and whilst (at a unit test level) we were isolated from UDDI, when trying to manually test or just plain use the client, it was still reliant on a UDDI implementation. Obviously it's possible to implement multiple registries to allow use in this manner, but in our case it seemed cheaper and easier (no comment as to the registry we were using) to create a mock!

We have been using SOAP UI for mocking service implementations during early phases of development for some time, so it seemed natural for one bright spark to try this with UDDI. Our UDDI implementation and therefore endpoint lookup was pretty simplistic (this was done on purpose) – each service had only a single (load balanced) endpoint that we looked up by binding key.

The following is a sample project that I know runs with SOAP UI version 3.0.1. The project file contains a demo mock UDDI project demonstrating how to look up and return a packaged response based on a binding key. Creation of a mock service in SOAP UI is straightforward and the user guide is pretty good, so it should be easy to add your own endpoints.
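To give a flavour of what the mock returns, the canned response is essentially a UDDI get_bindingDetail result - something like this sketch (the keys and endpoint are illustrative, and the exact envelope depends on the UDDI version you target):

<bindingDetail generic="2.0" xmlns="urn:uddi-org:api_v2">
  <bindingTemplate bindingKey="uuid:00000000-0000-0000-0000-000000000001" serviceKey="uuid:00000000-0000-0000-0000-000000000002">
    <accessPoint URLType="http">http://hosting_server/Application/Service.svc</accessPoint>
  </bindingTemplate>
</bindingDetail>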

Oracle pipelined functions

I was recently looking to re-factor some rather nasty PL/SQL – granted, it's not something I would say I necessarily enjoy, but occasionally these things have to be done right… Anyway, back to the point: there was a rather nasty piece of dynamic SQL that was built up to query based on decoded values from another query. You have probably seen the sort of thing before – the example below shows the approach: select some encoded value (in this case a postcode, based on a spatial restriction) from one structure, then for each value found, decode it and add another where clause restriction.

FOR rec IN (SELECT ps.sector
    FROM postcode_sectors ps, test_extent te
    WHERE te.key_id = p_key_id
    AND SDO_RELATE(ps.shape,  te.extent, 'mask = anyinteract') = 'TRUE')
LOOP
    utils.split_postcode(rec.sector, area, district, sector);
    postcodeWhere := postcodeWhere || '(area = ''' || area || '''' ||
        ' AND district = ''' || district || '''' ||
        ' AND sector = ''' || sector || '''' || ') OR ';
END LOOP;

Once you have finished with the loop you trim off the final OR and use the restriction in another query. This is often used as a solution when you have data from different sources with different encodings; it's not pretty, but it works.

So I stumbled across an interesting feature in Oracle called pipelined functions that allows a PL/SQL function to be queried like a standard table. So in this case the function would present itself as the decoded values of the postcode. For the meat of the code it was a really easy re-factor following the documentation – the following shows the inside of the loop after the re-factor.

FOR rec IN (SELECT ps.sector
   FROM postcode_sectors ps, test_extent te
   WHERE te.key_id = p_key_id
   AND SDO_RELATE(ps.shape,  te.extent, 'mask = anyinteract') = 'TRUE')
LOOP
   UTILS.SPLIT_POSTCODE(rec.sector,area,district,sector);
   PIPE ROW( SPLIT_POSTCODE_TYPE(area,district,sector) );
END LOOP;
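
For context, that loop sits inside a function declared PIPELINED over a collection of the SPLIT_POSTCODE_TYPE rows being piped - a minimal sketch of the surrounding plumbing (the table type and function names, and the column sizes, are illustrative):

CREATE TYPE split_postcode_type AS OBJECT (area VARCHAR2(2), district VARCHAR2(4), sector VARCHAR2(6));
/
CREATE TYPE split_postcode_table AS TABLE OF split_postcode_type;
/
CREATE OR REPLACE FUNCTION decoded_postcodes (p_key_id IN NUMBER)
   RETURN split_postcode_table PIPELINED
IS
   area     VARCHAR2(2);
   district VARCHAR2(4);
   sector   VARCHAR2(6);
BEGIN
   -- the FOR rec IN ... LOOP shown above goes here, piping one row per decoded sector
   RETURN;
END decoded_postcodes;
/

The function can then be queried like any other table: SELECT * FROM TABLE(decoded_postcodes(:key_id)).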

This worked really well, and whilst I left the executed query as dynamic SQL (for the first-pass re-factor at least), it looked far better and the intention was much clearer.

There was a downer however: it performed like an absolute pig. Thinking "first it giveth, then it taketh away", I had a quick look at the execution plan and a bit of research to see if there was anything that could be done. Fortunately I found a very well written and thorough article describing the reasons for the performance issue and giving suggestions for setting the cardinality of pipelined functions to improve performance. I won't describe the solution, other than saying I chose the DYNAMIC_SAMPLING optimizer hint!
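For the curious, the hint goes on the query over the pipelined function, so the optimizer samples it rather than assuming a default cardinality - something like this (function name as per the sketch above):

SELECT /*+ DYNAMIC_SAMPLING(pc 2) */ *
FROM TABLE(decoded_postcodes(:key_id)) pc;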

JavaScript script per page in ASP.NET MVC

In ASP.NET MVC the views can get pretty big pretty quick, so I think anything you can do to keep them lean is worth trying. I wanted to come up with a way of allowing the JavaScript for a view page to be held in its own separate script file, and be served only if it exists.

I wanted to have the script file sit alongside the view with the same name (apart from the extension), and be served from this location; I thought this would be a nice convention, and at a glance it would be possible to see which views had scripts. The following is a less abstracted (i.e. shorter and easier to write up) version of what I came up with, but it should be more than enough to show how it can be done…

By default the Views sub folder web.config file blocks any request for resources, as MVC does not serve pages. The first change required is to amend this to block requests for *.aspx but serve *.js using the System.Web.StaticFileHandler. Something like the following in the httpHandlers section (remembering this will also need to be done in the system.webServer handlers section for IIS7 in integrated mode, as shown below):

<httpHandlers>
	<add path="*.aspx" verb="*" type="System.Web.HttpNotFoundHandler"/>
	<add verb="GET,HEAD" path="*.js" type="System.Web.StaticFileHandler"/>
</httpHandlers>
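
For IIS7 in integrated mode, the equivalent entries in the system.webServer handlers section would be something like this (handler names are required there; the names I've used are just illustrative):

<system.webServer>
	<handlers>
		<add name="BlockViewPages" path="*.aspx" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler"/>
		<add name="ViewScripts" path="*.js" verb="GET,HEAD" preCondition="integratedMode" type="System.Web.StaticFileHandler"/>
	</handlers>
</system.webServer>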

In the solution I already had a base class for the site master page. The following code was added to the OnPreRender for the site master base:

string scriptPath = Page.AppRelativeVirtualPath.Replace(".aspx", ".js");
if (HostingEnvironment.VirtualPathProvider.FileExists(scriptPath))
    Page.Header.AddScript(scriptPath);

It gets the script path from the current page's virtual path (replacing aspx with js to locate the view's script file alongside), then, using the VirtualPathProvider from the HostingEnvironment, adds a script reference to the header if the script file exists. Note that AddScript is a simple extension method on HtmlHead:

public static void AddScript(this HtmlHead head, string src)
{
    HtmlGenericControl script = new HtmlGenericControl { TagName = "script" };
    script.Attributes.Add("type", "text/javascript");
    script.Attributes.Add("src", head.Page.ResolveClientUrl(src));
    head.Controls.Add(script);
    head.Controls.Add(new LiteralControl("\n"));
}

that will just render a script tag in the header for the supplied script src.

So you end up being able to place your js files as you require alongside the view, and keep all the page-specific JavaScript (and lovely jQuery) stuff out of the view and in its own file.

Debugging a .NET service that is crashing on start up

Recently I was dealing with a .NET Windows service application that was failing when attempting to start the service. The entry showing up in the event log was a ".NET Runtime 2.0 Error Reporting" event of type clr20r3 for an unhandled framework exception - basically the framework logging that something is wrong (very wrong).

The actual exception logged was:

EventType clr20r3, P1 weauxpmmrp42zxajarbmdm5knch0ez4o, P2 0.4.112.0, P3 4ae90a3c, P4 nhibernate, P5 1.2.1.4000, P6 47c58d60, P7 ffd, P8 50, P9 nhibernate.mappingexception, P10 NIL.

This was annoying (to say the least) as the service had been running through all previous development and test installations, and was now failing in an environment where, due to network restrictions, it was not possible to debug with Visual Studio. However annoying the problem was, it was there and had to be solved.

With Windows services there are ways to hook up a debugger when running or on start-up to help identify issues; this KB article explains how to do this: http://support.microsoft.com/kb/824344.

In this case however we had a crashing service, so it seemed to make sense to capture the crash for investigation. Using the debugging tools (http://www.microsoft.com/whdc/DevTools/Debugging/default.mspx), specifically the script adplus.vbs (http://msdn.microsoft.com/en-us/library/cc265629.aspx), it is possible to create a memory dump of the application state at the point of the crash. It was a bit of a race situation (and I am sure there are better ways), but it was simple enough to start the service and then run the adplus command line to generate the crash dump. So, using the command:

Cscript adplus.vbs -crash -quiet -FullOnFirst -pn Service.exe

With -crash being self-explanatory, -quiet to make start-up quick (it was a race after all), -FullOnFirst to capture full dumps for all exceptions, and -pn to identify the process name to hook. Once the adplus script starts it attaches to the process and produces a dump when the application fails.

There are loads of articles about how to read and deal with dump files - a basic start is here: http://support.microsoft.com/kb/315263 - and whilst biased toward ASP.NET, the debugging series by Tess Ferrandez at http://blogs.msdn.com/tess/pages/net-debugging-demos-information-and-setup-instructions.aspx is a great place to get introduced to the tools and techniques.

In this case the issue was identified from the information returned by the simple command "!analyze -v" in WinDbg (more info on available commands can be found at http://windbg.info/doc/1-common-cmds.html). The issue was a pretty simple one: our service was using NHibernate over an Oracle database via the Oracle Data Access Client. The ODAC is only delivered (at version 11.1.0.7.20) with a 32-bit implementation, but the release config build of the service was not restricted to 32 bits - and so was throwing a BadImageFormatException on the ODAC dll. A quick change to the assembly using corflags /32BIT+, and a fix to the release configuration, solved this.
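For reference, the corflags change is a one-liner run against the service executable (using the process name from the adplus example above):

corflags Service.exe /32BIT+

which marks the assembly to always load as a 32-bit process, even on 64-bit Windows.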

Using Fiddler to manually mock a service response

Fiddler (http://www.fiddler2.com/fiddler2/) is one of those tools that make those virtually impossible tasks possible, sometimes even easy.

This post will describe how to use Fiddler to mock responses from a service, but first I will describe the problem we faced, and why Fiddler made the investigation so easy.

One of the integration points in a large SOA solution we are working on was an Oracle Application Server MTOM service being consumed by a WCF .NET 3.5 client to upload and download large data packets. All was working fine (perhaps surprisingly) until the client platform moved to .NET 3.5 SP1, where the download operation failed with an invalid MIME header exception. We did of course try to identify the issue by reading through the specifications, but after circling through the MTOM, SOAP 1.2 and MIME specs a few (dozen) times with no real output, we decided to try another approach. This is where Fiddler came in: we were able to capture the response from the service and investigate the causes of the MIME header exception by pushing through amended responses - not quite so unscientific as trial and error, but not far off!

First, fire up Fiddler, execute the service, and capture a response, saving it to a file. An easy way to filter the requests of interest from all the other extraneous HTTP traffic is to use the Filters tab to show traffic only from an identified host:

[Screenshot: the Fiddler Filters tab, restricting the capture to the service host]

Edit the saved response as required, then on the Fiddler Rules menu select Automatic Breakpoints and choose Before Requests (shortcut key F11). This will stop the request before it hits the server, allowing you to supply your edited response instead. Once the client makes the request you will notice the request held at a breakpoint in Fiddler:

[Screenshot: the intercepted request held at a breakpoint in the Fiddler session list]

Note that at the breakpoint Fiddler will let you choose the response to send to the client; drop down the choose response combo in the breakpoint and select your amended response file. Finally, press Run to Completion, which will push the amended response on to the client, where you can continue or debug as required.

This sort of service mocking can (of course) be automated with Fiddler, which can be really useful during development, but this manual approach allowed easy editing of the response headers, letting us identify the issue relatively quickly. For info, the interop issue lies with both Oracle and Microsoft. I can't see the fix being as quick as the investigation…
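As a pointer for the automated route, a few lines of FiddlerScript inside the existing OnBeforeRequest handler in CustomRules.js can return a canned response without the server being hit at all, using the documented x-replywithfile session flag (the host, URI fragment and file path here are illustrative):

// Serve the amended response from disk instead of forwarding to the server
if (oSession.HostnameIs("hosting_server") && oSession.uriContains("Service.svc")) {
    oSession["x-replywithfile"] = "C:\\mocks\\amended-response.dat";
}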

Securing ELMAH with ASP.NET MVC

Disclaimer: This is one of those posts that unashamedly links off to loads of other clever people.

ELMAH gets loads of love, and recently has been the subject of many posts about implementation with ASP.NET MVC. With the recent 1.0 release I thought I would build it into a site due to go into production in the near future, so I obviously needed to protect it from evildoers. The first link is a great one for describing the basic implementation of ELMAH on ASP.NET MVC.

So that's it up and running - if it took you more than 10 minutes I would be surprised!

Next, to restrict access to the log target. Phil Haack posted a while back about securing ELMAH with ASP.NET, which takes very little additional effort to apply to a secured ASP.NET MVC site (obviously).

The approach is to move the handler down to a different path by changing the handler config - in this case adding the path segment logs (and changing elmah to exceptions):

<handlers>
    <add name="Elmah" verb="POST,GET,HEAD" path="logs/exceptions.axd" 
	preCondition="integratedMode" type="Elmah.ErrorLogPageFactory, Elmah"/>
</handlers>

Note that this is for IIS7; for IIS6 change the httpHandlers section instead, as below.
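The IIS6 httpHandlers equivalent would be along these lines (the standard ELMAH entry with the amended path):

<httpHandlers>
	<add verb="POST,GET,HEAD" path="logs/exceptions.axd" type="Elmah.ErrorLogPageFactory, Elmah"/>
</httpHandlers>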

You can then add a location section to the system.web element of the root web.config to deny access to unauthenticated users (or roles whatever...) such as this:

<location path="logs">
	<system.web>
		<authorization>
			<deny users="?" />
		</authorization>
	</system.web>
</location>

Make sure that the routing ignore rule that you set up originally is also changed to reflect the new path - something like:

public static void RegisterRoutes(RouteCollection routes)
{
    //ELMAH exception handling 
    
    routes.IgnoreRoute("logs/{resource}.axd/{*pathInfo}");
...

and that's it. Requests to logs/exceptions.axd will be protected.

Service factory merge issues...

I have already mentioned that I have been heavily using the Service Factory modelling edition in recent projects; very recently I got a massive conflict with an SVN update to one particular service!

The service contract itself took an awful lot of fixing, with a lot of messing around in DiffMerge to make sure both sets of changes were correctly merged - it's not easy to merge, and reading the operations and relating them back is honkingly fiddly, but not impossible. Then it came to the supporting diagram XML - this was a real mess; it really only took me one look before I had to think of something else. After thinking through the hassle of rolling back my changes and then re-applying them to the updated contract, I realised I wanted to avoid all that, so I got into the old school "just try it and see" mode and canned the inner XML from the diagram file. Sweetly, if you leave the root element and delete all the content, once you open the service contract diagram it re-adds all the required elements. So it's just a case of laying out again - significantly less hassle than the other option, and I was chuffed that I saved time.

By the way, this also worked with the data contract, which was similarly conflicted.

ASP.NET MVC, Moq, and unit testing auth/auth

This is a short post about unit testing authentication and authorisation using the framework Authorize attribute. For the most recent project we built our own IMembershipService implementation behind the ASP.NET provider template to meet very specific customer requirements, but this testing approach remains the same if you use the built-in one.

It was clear pretty quickly that duplication was going to happen when testing authentication and authorisation, so I wanted to distill the test code as much as I could. I ended up writing two helper methods to check for redirection of an unauthenticated user, and of a user not in a defined role - the following is the helper class in full:

public class AuthenticationTests
{
    public static void UnAuthorizedUserRedirects(Controller controller, string actionName)
    {
        var user = new Mock<IPrincipal>();
        user.ExpectGet(u => u.Identity.IsAuthenticated).Returns(false);
        var context = new Mock<ControllerContext>();
        context.ExpectGet(c => c.Controller).Returns(controller);
        context.ExpectGet(c => c.HttpContext.User).Returns(user.Object);
        context.ExpectSet(c => c.HttpContext.Response.StatusCode).Callback(status => Assert.AreEqual(401, status)).Verifiable();
        controller.ActionInvoker.InvokeAction(context.Object, actionName);
        context.VerifyAll();
        user.VerifyAll();
    }
    public static void UserNotInRoleRedirects(Controller controller, string actionName, string roleName)
    {
        var user = new Mock<IPrincipal>();
        user.ExpectGet(u => u.Identity.IsAuthenticated).Returns(true);
        user.Expect(u => u.IsInRole(roleName)).Returns(false);
        var context = new Mock<ControllerContext>();
        context.ExpectGet(c => c.Controller).Returns(controller);
        context.ExpectGet(c => c.HttpContext.User).Returns(user.Object);
        context.ExpectSet(c => c.HttpContext.Response.StatusCode).Callback(status => Assert.AreEqual(401, status)).Verifiable();
        controller.ActionInvoker.InvokeAction(context.Object, actionName);
        context.VerifyAll();
        user.VerifyAll();
    }
}


Using Moq, they set expectations on a mocked IPrincipal and ControllerContext to ensure that the action sets an HTTP StatusCode of 401 (unauthorised); I won't go into too much detail - you can read the code, and it's actually surprisingly straightforward…

Usage is then dead easy and keeps the tests really clean - just spin up your controller and ask for one of its actions to be checked - for example:

[Test]
public void UnAuthenticatedUserRequestForRevokeRedirects()
{
    Mock<IMembershipService> membershipService = new Mock<IMembershipService>();
    AccountAdminController controller = new AccountAdminController(membershipService.Object);
    AuthenticationTests.UnAuthorizedUserRedirects(controller, "Revoke");
}
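
And the role-based check is used the same way - a sketch assuming a hypothetical "Administrators" role requirement on the same action:

[Test]
public void UserNotInAdminRoleRequestForRevokeRedirects()
{
    Mock<IMembershipService> membershipService = new Mock<IMembershipService>();
    AccountAdminController controller = new AccountAdminController(membershipService.Object);
    AuthenticationTests.UserNotInRoleRedirects(controller, "Revoke", "Administrators");
}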