Oracle pipelined functions

I was recently looking to refactor some rather nasty PL/SQL. Granted, it's not something I would say I necessarily enjoy, but occasionally these things have to be done right... Anyway, back to the point: there was a rather nasty piece of dynamic SQL that was built up to query on decoded values from another query. You have probably seen the sort of thing before; the example below shows the approach: select some encoded value (in this case a postcode sector within a spatial restriction) from one structure, then for each value found, decode it and append another WHERE clause restriction.

FOR rec IN (SELECT ps.sector
    FROM postcode_sectors ps, test_extent te
    WHERE te.key_id = p_key_id
    AND SDO_RELATE(ps.shape,  te.extent, 'mask = anyinteract') = 'TRUE')
LOOP
    utils.split_postcode(rec.sector, area, district, sector);
    postcodeWhere := postcodeWhere || '(area = ''' || area || '''' ||
        ' AND district = ''' || district || '''' ||
        ' AND sector = ''' || sector || '''' || ') OR ';
END LOOP;

Once you have finished with the loop you trim off the final OR and use the restriction in another query. This is often used as a solution when you have data from different sources with different encodings; it's not pretty, but it works.
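
For illustration, the finishing step is something along these lines; postcodeWhere comes from the loop above, while the cursor and target table are hypothetical:

-- drop the trailing ' OR ' (four characters) left by the final iteration
postcodeWhere := SUBSTR(postcodeWhere, 1, LENGTH(postcodeWhere) - 4);

-- use the accumulated restriction in the dynamic query
OPEN result_cursor FOR
    'SELECT * FROM customer_addresses WHERE ' || postcodeWhere;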

So I stumbled across an interesting feature in Oracle called pipelined table functions, which allows a PL/SQL function to be queried like a standard table. In this case the function would present itself as the decoded postcode values. For the meat of the code it was a really easy refactor following the documentation; the following shows the inside of the loop after the refactor.

FOR rec IN (SELECT ps.sector
   FROM postcode_sectors ps, test_extent te
   WHERE te.key_id = p_key_id
   AND SDO_RELATE(ps.shape,  te.extent, 'mask = anyinteract') = 'TRUE')
LOOP
   UTILS.SPLIT_POSTCODE(rec.sector, area, district, sector);
   PIPE ROW( SPLIT_POSTCODE_TYPE(area,district,sector) );
END LOOP;
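
To give that refactor a little more context, the surrounding pipelined function looks roughly like this sketch; only SPLIT_POSTCODE_TYPE appears in the code above, so the other names and column sizes are illustrative:

CREATE TYPE split_postcode_type AS OBJECT (
    area     VARCHAR2(4),
    district VARCHAR2(4),
    sector   VARCHAR2(4)
);

CREATE TYPE split_postcode_tab AS TABLE OF split_postcode_type;

CREATE OR REPLACE FUNCTION split_postcodes(p_key_id IN NUMBER)
    RETURN split_postcode_tab PIPELINED
IS
    area     VARCHAR2(4);
    district VARCHAR2(4);
    sector   VARCHAR2(4);
BEGIN
    -- the FOR loop above goes here, piping a row for each decoded sector
    RETURN;
END;

The decoded values can then be queried like any other table:

SELECT area, district, sector
FROM TABLE(split_postcodes(:key_id));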

This worked really well, and whilst I left the executed query as dynamic SQL (for the first-pass refactor at least), it looked far better and the intention was much clearer.

There was a downer, however: it performed like an absolute pig. Thinking "first it giveth, then it taketh away", I had a quick look at the execution plan and did a bit of research to see if there was anything that could be done. Fortunately I found a very well written and thorough article describing the reasons for the performance issue and giving suggestions for setting the cardinality of pipelined functions to improve performance. I won't describe the solution in detail, other than to say I chose the DYNAMIC_SAMPLING optimizer hint!
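
For reference, applying the hint to the table function query looks something like this (the function name follows the sketch above, and the sampling level is just an example):

SELECT /*+ DYNAMIC_SAMPLING(p 5) */ area, district, sector
FROM TABLE(split_postcodes(:key_id)) p;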

JavaScript script per page in ASP.NET MVC

In ASP.NET MVC the views can get pretty big pretty quickly, so I think anything you can do to keep them lean is worth trying. I wanted to come up with a way of allowing the JavaScript for a view page to be held in its own separate script file, and to be served only if it exists.

I wanted the script file to sit alongside the view, with the same name (apart from the extension), and be served from that location; I thought this would be a nice convention, and at a glance it would be possible to see which views had scripts. The following is a less abstracted (i.e. shorter and easier to write up) version of what I came up with, but should be more than enough to show how it can be done...

By default the web.config file in the Views subfolder blocks any request for resources, as MVC does not serve pages from there. The first change required is to amend this so that it still blocks requests for *.aspx but serves *.js using the System.Web.StaticFileHandler. Something like the following in the httpHandlers section (remembering this will also need to be done in the system.webServer handlers section for IIS7 in integrated mode; a sketch of that is shown after the block):

<httpHandlers>
	<add path="*.aspx" verb="*" type="System.Web.HttpNotFoundHandler"/>
	<add verb="GET,HEAD" path="*.js" type="System.Web.StaticFileHandler"/>
</httpHandlers>
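
For IIS7 in integrated mode the equivalent goes in the system.webServer handlers section; a sketch (the handler names here are illustrative):

<handlers>
	<add name="BlockAspx" path="*.aspx" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler"/>
	<add name="ViewScripts" verb="GET,HEAD" path="*.js" preCondition="integratedMode" type="System.Web.StaticFileHandler"/>
</handlers>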

In the solution I already had a base class for the site master page. The following code was added to the OnPreRender for the site master base:

string scriptPath = Page.AppRelativeVirtualPath.Replace(".aspx", ".js");
if (HostingEnvironment.VirtualPathProvider.FileExists(scriptPath))
    Page.Header.AddScript(scriptPath);

It gets the script path from the current page's virtual path (replacing aspx with js to locate the view's script file alongside), then, using the VirtualPathProvider from the HostingEnvironment, adds a script reference to the header if the script file exists. Note that AddScript is a simple extension method on HtmlHead:

public static void AddScript(this HtmlHead head, string src)
{
    HtmlGenericControl script = new HtmlGenericControl { TagName = "script" };
    script.Attributes.Add("type", "text/javascript");
    script.Attributes.Add("src", head.Page.ResolveClientUrl(src));
    head.Controls.Add(script);
    head.Controls.Add(new LiteralControl("\n"));
}

that will just render a script tag in the header for the supplied script src.

So you end up being able to place your .js files as required alongside the views, keeping all the page-specific JavaScript (and lovely jQuery) stuff out of the view and in its own file.

Debugging a .NET service that is crashing on start up

Recently I was dealing with a .NET Windows service application that was failing when attempting to start the service. The error showing up in the event log was a ".NET Runtime 2.0 Error Reporting" entry of event type clr20r3; basically the framework logging that something is wrong (very wrong).

The actual exception logged was:

EventType clr20r3, P1 weauxpmmrp42zxajarbmdm5knch0ez4o, P2 0.4.112.0, P3 4ae90a3c, P4 nhibernate, P5 1.2.1.4000, P6 47c58d60, P7 ffd, P8 50, P9 nhibernate.mappingexception, P10 NIL.

This was annoying (to say the least) as the service had been running through all previous development and test installations, and was now failing in an environment where, due to network restrictions, it was not possible to debug with Visual Studio. However annoying the problem was, it was there and had to be solved.

With Windows services there are ways to hook up a debugger to a running service, or at service start-up, to help identify issues; this KB article explains how: http://support.microsoft.com/kb/824344.

In this case however we had a crashing service, so it seemed to make sense to capture the crash for investigation. Using the Debugging Tools for Windows (http://www.microsoft.com/whdc/DevTools/Debugging/default.mspx), specifically the adplus.vbs script (http://msdn.microsoft.com/en-us/library/cc265629.aspx), it is possible to create a memory dump of the application state at the point of the crash. It was a bit of a race situation (and I am sure there are better ways), but it was simple to start the service and then run the adplus command line to generate the crash dump. So, using the command:

cscript adplus.vbs -crash -quiet -FullOnFirst -pn Service.exe

With -crash being self-explanatory, -quiet to make start-up quick (it was a race, after all), -FullOnFirst to capture full dumps for all exceptions, and -pn to identify the process name to hook. Once the adplus script starts it attaches to the process and produces a dump when the application fails.

There are loads of articles about how to read and deal with dump files; a basic start is here: http://support.microsoft.com/kb/315263, and whilst biased towards ASP.NET, the debugging series by Tess Ferrandez at http://blogs.msdn.com/tess/pages/net-debugging-demos-information-and-setup-instructions.aspx is a great place to get introduced to the tools and techniques.

In this case the issue was identified from the information returned by the simple command "!analyze -v" in WinDbg (more info on available commands can be found at http://windbg.info/doc/1-common-cmds.html). The issue was a pretty simple (if stupid) one: our service was using NHibernate over an Oracle database via the Oracle Data Access Client. The ODAC is only delivered (at version 11.1.0.7.20) with a 32-bit implementation, but the release build of the service was not restricted to 32 bits, and so was throwing a bad image format exception on the ODAC DLL. A quick change to the executable using corflags /32BIT+, and a fix to the release configuration, solved this.
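
For reference, the corflags fix is a one-liner against the built executable (the file name here follows the adplus example above):

corflags Service.exe /32BIT+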

Using Fiddler to manually mock a service response

Fiddler (http://www.fiddler2.com/fiddler2/) is one of those tools that make those virtually impossible tasks possible, sometimes even easy.

This post will describe how to use fiddler to mock responses from a service, but first I will describe the problem we faced, and why fiddler made the investigation so easy.

One of the integration points in a large SOA solution we are working on was an Oracle Application Server MTOM service being consumed by a WCF .NET 3.5 client to upload and download large data packets. All was working fine (perhaps surprisingly) until the client platform moved to .NET 3.5 SP1, where the download operation failed with an invalid MIME header exception. We did of course try to identify the issue by reading through the specifications, but after circling through the MTOM, SOAP 1.2 and MIME specs a few (dozen) times with no real output, we decided to try another approach. This is where Fiddler came in: we were able to capture the response from the service and investigate the causes of the MIME header exception by pushing amended responses through to the client. Not quite so unscientific as trial and error, but not far off!

First, fire up Fiddler, execute the service, and capture a response, saving it to a file. An easy way to filter the requests of interest from all the other extraneous HTTP traffic is to use the Filters tab to show traffic only from an identified host:

[Screenshot: Fiddler's Filters tab, restricting the capture to the service host]

Edit the saved response as required, then on the Fiddler Rules menu select Automatic Breakpoints and choose Before Requests (shortcut key F11). This will stop each request before it hits the server, allowing you to supply your edited response instead. Once the client makes the request you will see it held at a breakpoint in Fiddler:

[Screenshot: the captured request held at a breakpoint in the Fiddler session list]

Note that at the breakpoint Fiddler will let you choose the response to send to the client: drop down the choose-response combo in the breakpoint bar and select your amended response file. Finally, press Run to Completion, which will push the amended response on to the client, where you can continue or debug as required.

This sort of service mocking can (of course) be automated with Fiddler, which can be really useful during development, but this manual approach allowed easy editing of the response headers and let us identify the issue relatively quickly. For info, the interop issue is with both Oracle and Microsoft. I can't see the fix being as quick as the investigation...
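
As a taste of the automated approach, adding something like this to the OnBeforeRequest handler in Fiddler's CustomRules.js replays a saved response file without touching the real service; the host name and file path here are purely illustrative:

static function OnBeforeRequest(oSession: Session) {
    // serve a local file as the response instead of hitting the server
    if (oSession.HostnameIs("service.example.com")) {
        oSession["x-replywithfile"] = "C:\\mocks\\amended-response.dat";
    }
}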

Securing ELMAH with ASP.NET MVC

Disclaimer: This is one of those posts that unashamedly links off to loads of other clever people.

ELMAH gets loads of love, and recently has been the subject of many posts about implementing it with ASP.NET MVC. With the recent 1.0 release I thought I would build it into a site due to go into production in the near future, so I obviously needed to protect it from evildoers. The first link is a great one for describing the basic implementation of ELMAH on ASP.NET MVC.

So that's it up and running - if it took you more than 10 minutes I would be surprised!

Next, restrict access to the log target. Phil Haack posted a while back about securing ELMAH with ASP.NET, and it takes very little additional effort to apply the same approach to a secured ASP.NET MVC site (obviously).

The approach is to move the handler to a different path by changing the handler config - in this case adding the path prefix logs/ (and changing elmah.axd to exceptions.axd):

<handlers>
    <add name="Elmah" verb="POST,GET,HEAD" path="logs/exceptions.axd" 
	preCondition="integratedMode" type="Elmah.ErrorLogPageFactory, Elmah"/>
</handlers>

Note that this is for IIS7 in integrated mode; for IIS6 change the httpHandlers section instead.

You can then add a location section to the root web.config to deny access to unauthenticated users (or roles, whatever...) such as this:

<location path="logs">
	<system.web>
		<authorization>
			<deny users="?" />
		</authorization>
	</system.web>
</location>

Make sure that the routing ignore rule that you set up originally is also changed to reflect the new path - something like

public static void RegisterRoutes(RouteCollection routes)
{
    //ELMAH exception handling
    routes.IgnoreRoute("logs/{resource}.axd/{*pathInfo}");
...

and that's it. Requests to logs/exceptions.axd will be protected.

Service factory merge issues...

I have already mentioned that I have been heavily using the Service Factory Modelling Edition in recent projects; very recently I got massive conflicts from an SVN update on one particular service!

The service contract itself took an awful lot of fixing, with a lot of messing around in DiffMerge to make sure the changes from both sides were correctly merged; reading the operations and relating them back is honkingly fiddly, but not impossible. Then it came to the supporting diagram XML. This was a real mess; it really only took me one look before I had to think of something else. After thinking through the hassle of rolling back my changes and then re-applying them to the updated contract, I realised I wanted to avoid that altogether, so I got into old-school "just try it and see" mode and canned the inner XML from the diagram file. Sweetly, if you leave the root element and delete all its content, the next time you open the service contract diagram it re-adds all the required elements. So it's just a case of laying things out again; significantly less hassle than the other option, and I was chuffed to save the time.

By the way, the same trick worked with the data contract, which was also massively conflicted.

ASP.NET MVC, Moq, and unit testing auth/auth

This is a short post about unit testing authentication and authorisation using the framework's Authorize attribute. For the most recent project we built our own implementation of IMembershipService behind the ASP.NET provider template to meet very specific customer requirements, but the testing approach remains the same if you use the built-in one.

It was clear pretty quickly that duplication was going to happen when testing authentication and authorisation, so I wanted to distill the test code as much as I could. I ended up writing two helper methods to check for redirection of an unauthenticated user, and of a user not in a defined role; the following shows the helper class in full:

public class AuthenticationTests
{
    public static void UnAuthorizedUserRedirects(Controller controller, string actionName)
    {
        var user = new Mock<IPrincipal>();
        user.ExpectGet(u => u.Identity.IsAuthenticated).Returns(false);
        var context = new Mock<ControllerContext>();
        context.ExpectGet(c => c.Controller).Returns(controller);
        context.ExpectGet(c => c.HttpContext.User).Returns(user.Object);
        context.ExpectSet(c => c.HttpContext.Response.StatusCode).Callback(status => Assert.AreEqual(401, status)).Verifiable();
        controller.ActionInvoker.InvokeAction(context.Object, actionName);
        context.VerifyAll();
        user.VerifyAll();
    }
    public static void UserNotInRoleRedirects(Controller controller, string actionName, string roleName)
    {
        var user = new Mock<IPrincipal>();
        user.ExpectGet(u => u.Identity.IsAuthenticated).Returns(true);
        user.Expect(u => u.IsInRole(roleName)).Returns(false);
        var context = new Mock<ControllerContext>();
        context.ExpectGet(c => c.Controller).Returns(controller);
        context.ExpectGet(c => c.HttpContext.User).Returns(user.Object);
        context.ExpectSet(c => c.HttpContext.Response.StatusCode).Callback(status => Assert.AreEqual(401, status)).Verifiable();
        controller.ActionInvoker.InvokeAction(context.Object, actionName);
        context.VerifyAll();
        user.VerifyAll();
    }
}

Using Moq, they set expectations on a mocked IPrincipal and ControllerContext to ensure that the action sets an HTTP status code of 401 (unauthorised); I won't go into too much detail, you can read the code, and it's actually surprisingly straightforward...

Usage is then dead easy and keeps the tests really clean: just spin up your controller and ask for one of the actions to be checked, for example:

[Test]
public void UnAuthenticatedUserRequestForRevokeRedirects()
{
    Mock<IMembershipService> membershipService = new Mock<IMembershipService>();
    AccountAdminController controller = new AccountAdminController(membershipService.Object);
    AuthenticationTests.UnAuthorizedUserRedirects(controller, "Revoke");
}

ASP.NET MVC RC1 Authorize

I found a wee issue with the ASP.NET MVC Web Application template and the Authorize attribute. Initially I was in two minds whether to convert the existing app or create a new project from the template and port the code over; I took the second option, as this usually causes the least amount of angst in the long run...

So the issue was found when trying to use the Authorize attribute to redirect to the template's login page. By default the template's web.config has the following authentication section:

<authentication mode="Forms">
	<forms loginUrl="~/Account/LogIn" />
</authentication>

but the AccountController action is called LogOn!  Note the subtle difference between LogIn and LogOn...
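
The fix is simply to point loginUrl at the action the template actually provides:

<authentication mode="Forms">
	<forms loginUrl="~/Account/LogOn" />
</authentication>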

ASP.NET MVC and Moq

Using Beta 1 of ASP.NET MVC, I was trying to use Moq to test a controller action that called UpdateModel with a FormCollection as the value provider. Reading Scott Guthrie's post for Beta 1, he makes it clear that it should be possible not to mock the context for the controller, to decrease unit test friction; however, any use of UpdateModel generates an ArgumentNullException...

So I still had to mock. Hopefully this will be sorted out in the next beta, but for now, using Moq, this is the minimum required to mock the ControllerContext:

var routeData = new RouteData();
var httpContext = new Mock<HttpContextBase>();
var controllerContext = new Mock<ControllerContext>(httpContext.Object, routeData, controller);
controller.ControllerContext = controllerContext.Object;
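
To put that in context, a test using the mocked context might look like this sketch; ProductController, its Edit action and the form field are hypothetical:

[Test]
public void EditBindsNameFromForm()
{
    // hypothetical controller under test
    var controller = new ProductController();

    // the minimum mocked context so that UpdateModel does not throw
    var routeData = new RouteData();
    var httpContext = new Mock<HttpContextBase>();
    controller.ControllerContext = new Mock<ControllerContext>(httpContext.Object, routeData, controller).Object;

    // the action is assumed to call UpdateModel with this form as the value provider
    var form = new FormCollection { { "Name", "Widget" } };
    var result = controller.Edit(1, form);

    Assert.IsNotNull(result as ViewResult);
}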

NUnit binary requirement...

I like to have all the dependencies for a project in the source code repository so that the solution can build for new developers straight from a checkout. After having to sift through the NUnit binaries (again) to work out what I needed rather than referencing the GAC, this list is here as a reminder:

nunit-console-runner.dll
nunit-console.exe
nunit-console.exe.config
nunit-gui-runner.dll
nunit.core.dll
nunit.core.extensions.dll
nunit.core.interfaces.dll
nunit.exe
nunit.exe.config
nunit.framework.dll
nunit.framework.extensions.dll
nunit.mocks.dll
nunit.uikit.dll
nunit.util.dll

This includes the UI stuff required for running the GUI, just in case there is someone out there who doesn't have ReSharper...