Re-factoring unit tests to use automatic mocks

Working as a consultant with lots of different teams, helping to promote engineering practices, I often hear ‘reasons’ as to why teams and developers don’t or can’t write automated tests. Most developers and teams want to do things that will ultimately make their lives easier, but sometimes hurdles and issues seem too big to overcome, and so they stop trying! One conversation with a particular developer a while back went along the lines of “the tests slow me down”. This intrigued me as personally I find that TDD speeds me up - so I asked a few more questions and paired with him to see his approach. Long story short, I found that when building, this developer changed his mind (and his constructor-injected dependencies) a lot, and without auto mocking he was changing his mocks and construction code more than his production code.

Everyone’s approach is different – that’s what makes the world so great right – so when I listened and understood the pain I was able to offer a solution. I think all too often we just say “do this” or “do that” without looking at the reasons people may be struggling…

I will try and demonstrate the change in approach to auto mocking that helped in this case using a massively fictitious (and equally tragic) example.

We have an article service that returns articles based on a string Identifier (presumably to display somewhere). The first test written using NUnit and Moq to read the article looks like so:

[Test]
public void returns_read_article()
{
    var repository = new Mock<IArticleRepository>();
    repository.Setup(r => r.Get(It.IsAny<string>())).Returns(new Article());

    var service = new ArticleService(repository.Object);

    var article = service.Get(Guid.NewGuid().ToString());
    Assert.IsNotNull(article);
}

And the simplest ‘production’ code looks like:

public class ArticleService
{
    private readonly IArticleRepository _repository;

    public ArticleService(IArticleRepository repository)
    {
        _repository = repository;
    }

    public Article Get(string articleId)
    {
        return _repository.Get(articleId);
    }
}
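For reference, the repository interface and article type this example assumes aren’t shown above – they are nothing more exciting than something like this:

public interface IArticleRepository
{
    Article Get(string articleId);
}

public class Article
{
    // whatever article content you need – empty here as the tests only check for null
}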

I told you it was a tragic example right!

So the issue this developer had was that whenever he changed his mind and needed to introduce another dependency, all of his tests needed to be changed (OK, in this case it’s one test, but if you have 20 tests that’s a bit more work). Let’s simulate the change with another tragic example – let’s say we have been asked to change the system to record the popularity of articles – we decide that the easiest way is to change our ArticleService to use another dependency to record each read of an article.

Obviously as soon as we add another dependency the test code will need to change too. So what are our choices? 1. Give up. 2. Put up with it and re-factor all of the tests. 3. Try and find a way to focus our use of mocked dependencies. 1 and 2 have already been tried! So let’s try 3!

How about we automatically create Moq mocks for the service – like when we use a DI container, just injecting mocks that we can control. Most test/mocking frameworks already have this capability – they aren’t too tricky to build yourself, but as we are using Moq how about we NuGet install and use AutoMoq.

[Test]
public void returns_read_article()
{
    AutoMoqer moqer = new AutoMoqer();
    moqer.GetMock<IArticleRepository>().Setup(r => r.Get(It.IsAny<string>())).Returns(new Article());

    var service = moqer.Create<ArticleService>();

    var article = service.Get(Guid.NewGuid().ToString());
    Assert.IsNotNull(article);
}

Simple changes – we new up an AutoMoqer, tell it to use a Moq with the same Setup for our repository, then create our service using the Moqer – tests run green. Now we can add our test for the new feature:

[Test]
public void records_article_read()
{
    AutoMoqer moqer = new AutoMoqer();
    moqer.GetMock<IArticlePopularityManager>().Setup(r => r.RecordRead(It.IsAny<string>())).Verifiable();

    var service = moqer.Create<ArticleService>();

    service.Get(Guid.NewGuid().ToString());
    moqer.GetMock<IArticlePopularityManager>().Verify();
}

This time we only deal with our new popularity manager (I know, tragic!) – obviously the test fails – so we change the implementation step by step until we get green tests and we end up with:

public class ArticleService
{
    private readonly IArticleRepository _repository;
    private readonly IArticlePopularityManager _articlePopularityManager;

    public ArticleService(IArticleRepository repository,
        IArticlePopularityManager articlePopularityManager)
    {
        _repository = repository;
        _articlePopularityManager = articlePopularityManager;
    }

    public Article Get(string articleId)
    {
        _articlePopularityManager.RecordRead(articleId);
        return _repository.Get(articleId);
    }
}

Our existing test (apart from our re-factor to auto mocking) was untouched and still runs green – so we can now amend dependencies to our heart’s content, only affecting tests relying on those dependencies. So our developer is happy that he can write tests without this issue slowing him down.

Really simple. Before you start thinking this is still a bit kek with all the duplication ‘n stuff – go and have a look at the AutoMoqTestFixture in the AutoMoq.Helpers namespace, then re-factor again!
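If you want a flavour of where that re-factor heads before you look, a hand-rolled fixture in the same spirit looks roughly like this – note this is only a sketch built on the AutoMoqer calls used above, not the actual AutoMoqTestFixture API:

public abstract class AutoMockingFixture<TSubject> where TSubject : class
{
    protected AutoMoqer Moqer { get; private set; }

    private TSubject _subject;

    // create the subject lazily so mocks can be set up first
    protected TSubject Subject
    {
        get { return _subject ?? (_subject = Moqer.Create<TSubject>()); }
    }

    // convenient access to any mocked dependency
    protected Mock<TDependency> Mocked<TDependency>() where TDependency : class
    {
        return Moqer.GetMock<TDependency>();
    }

    [SetUp]
    public void ResetAutoMocking()
    {
        Moqer = new AutoMoqer();
        _subject = null;
    }
}

The article tests then shrink to a Setup on Mocked<IArticleRepository>() (or the popularity manager) and a call through Subject.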

Aspects of logging - love and hate

It's fair to say I have a love-hate relationship with logging. I love it when logging can help identify the issue - when it's enough but not too much! But I hate it when logging code obfuscates the intent of the code. Go on, just think back to all those code bases you have seen littered with Log.Write statements that were clearly put in as the result of a very pressured debug session - not that you would have ever done this of course...

One of the simple things I like to do early on in projects is build in a capability to 'configurably' log method entry and exit within my code base. With parameter details this is more often than not enough for those times when your 'actual' logging isn't giving you enough to diagnose the issue. This allows me to essentially ignore logging at the start of development, as I can use this approach to help diagnose, and steer my 'actual' logging to make it targeted and useful rather than overly verbose and redundant!

I have blogged about this approach before (apparently quite a while ago) - but as there have been some significant breaking changes to Interception in Unity v3.5 and the Enterprise Library v6 logging application block, I thought a refresh was in order.

This approach is obviously possible and probably just as easy with other containers and logging tools - so there really is no excuse for random Log.Write statements everywhere!

So let's start from scratch - first we are going to add the NuGet package Unity.Interception, which will also bring in the Unity package. We create a container and register a simple test interface and implementation.

UnityContainer container = new UnityContainer();
container.RegisterType<ICommand, TestCommand>(); 
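The test interface and implementation are as dull as you would expect – something along these lines:

public interface ICommand
{
    void Execute();
}

public class TestCommand : ICommand
{
    public void Execute()
    {
        Console.WriteLine("Executing TestCommand");
    }
}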

We resolve that and execute our test command:

ICommand command = container.Resolve<ICommand>();
command.Execute(); 

Simple - so now we have a container we can look to add the configurable logging capability we want. We are going to use Enterprise Library for logging and also the configurable policy - so add the EnterpriseLibrary.Logging and EnterpriseLibrary.PolicyInjection packages.

Enterprise Library logging needs to be configured; for flexibility I tend to use the Xml configuration approach using the config tool that comes with the full binary download, but it can also be done using a fluent interface. There is plenty of documentation on getting started with the logging application block. Because in this simple approach we are going to use the static LogWriter, we need to bootstrap logging after we have configured it by calling:

Logger.SetLogWriter(new LogWriterFactory().Create()); 
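If you would rather avoid the Xml file altogether, Enterprise Library 6 can also be configured programmatically – roughly along these lines (treat the exact calls as a sketch rather than gospel, and the file name is just an example):

// build a LoggingConfiguration in code and hand it to the static Logger
var loggingConfiguration = new LoggingConfiguration();
loggingConfiguration.AddLogSource("General", SourceLevels.All, true,
    new FlatFileTraceListener(@"trace.log"));
Logger.SetLogWriter(new LogWriter(loggingConfiguration));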

Then we need to change the creation of the container to enable the Interception extension:

UnityContainer container = new UnityContainer();
container.AddNewExtension<Interception>();
container.LoadConfiguration(); 

Note that we are loading our external configuration with LoadConfiguration - at this point it will fail as we haven't yet sorted the Unity config (be patient).

We also need to change our registrations to add our interception behaviour - we are just using a simple interface interceptor here as our code will be all interface driven!

container.RegisterType<ICommand, TestCommand>(
    new InterceptionBehavior<PolicyInjectionBehavior>(),
    new Interceptor<InterfaceInterceptor>());

Obviously this could make registrations a bit verbose - so you can push this into an extension method.
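Something like this (a quick sketch) keeps the registrations readable:

public static class InterceptionRegistrationExtensions
{
    // register a mapping with policy injection interception enabled
    public static IUnityContainer RegisterInterceptedType<TFrom, TTo>(this IUnityContainer container)
        where TTo : TFrom
    {
        return container.RegisterType<TFrom, TTo>(
            new InterceptionBehavior<PolicyInjectionBehavior>(),
            new Interceptor<InterfaceInterceptor>());
    }
}

// usage
container.RegisterInterceptedType<ICommand, TestCommand>();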

The configuration is the only fiddly bit. Unity has a fluent config api so we could add:

container.Configure<Interception>()
    .AddPolicy("logging")
    .AddMatchingRule<AssemblyMatchingRule>(
        new InjectionConstructor(
            new InjectionParameter("Library")))
    .AddCallHandler<LogCallHandler>(
        new ContainerControlledLifetimeManager(),
        new InjectionConstructor(
            9001, true, true,
            "start", "finish", true, false, true, 10, 1));

after the container construction. This configuration adds a policy (called "logging") with an assembly matching rule that intercepts everything from assembly "Library" and uses the LogCallHandler (from the policy injection block) to log before and after (check out the constructor of LogCallHandler to see what all those bools and ints do!). Useful, but ideally we want to be able to drop the policy in without code change.

So using Unity design-time configuration, we create the following Unity config section:

<unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
    <sectionExtension type="Microsoft.Practices.Unity.InterceptionExtension.Configuration.InterceptionConfigurationExtension, Microsoft.Practices.Unity.Interception.Configuration" /> 

    <container>
      <extension type="Interception" />
      <interception>
        <policy name="logging">
          <matchingRule name="library" type="AssemblyMatchingRule">
            <constructor>
              <param name="assemblyName" value="Library" />
            </constructor>
          </matchingRule>
          <callHandler name="LogHandler" type="Microsoft.Practices.EnterpriseLibrary.Logging.PolicyInjection.LogCallHandler,Microsoft.Practices.EnterpriseLibrary.PolicyInjection">
            <constructor>
              <param name="eventId" value="9001" />
              <param name="logBeforeCall" value="true" />
              <param name="logAfterCall" value="true" />
              <param name="beforeMessage" value="Before" />
              <param name="afterMessage" value="After" />
              <param name="includeParameters" value="true" />
              <param name="includeCallStack" value="true" />
              <param name="includeCallTime" value="true" />
              <param name="priority" value="3" />
            </constructor>
          </callHandler>
        </policy>
      </interception>
    </container>
  </unity> 

Don't forget to add the section definition into configSections:

    <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration"/> 

This is doing the same as our fluent configuration - now you can see the named call handler constructor arguments being passed to the LogCallHandler which is applied as part of the "logging" policy with an assembly matching rule. We can then amend the Enterprise Library logging config to allow capture of parameters.

Now we can simply change the config file to remove the logging policy totally, or introduce more specific matchingRules to target the area of the code base that we haven't logged properly yet!

Problems that can happen - they never deliver!

The more businesses and development organisations I work with, the more I see common problem “themes”. In this series of articles I am trying to highlight the problems and show the potential solutions to some of these common symptoms.

“The team can’t deliver anything!”

“They are always late!”

I usually hear this as a general perception from those outside of the development team or those not involved in the day to day delivery of software.

Perception is reality - so there is a problem. What can we do?

Clarify goals and vision

If the development team can list *loads* of stuff they have done, then it may be that the delivery focus is off – so stuff is going out the door, but it’s not what the business needs. Ensure the development goals for the deliveries are clear, valuable, understood and agreed by everyone, not just the team. Whilst the development team may care that we are “re-factoring to remove the anti-pattern generic repositories to remove the leaky abstraction” (or whatever) – you know that means nothing to the guy selling it! Make sure the goals and vision mean something.

Advertise

It may be that stuff is actually getting out the door, but no one knows. Obviously you need to make sure people know the team have solved this problem or added that feature. This becomes exponentially easier once you have clarified your goals and vision, ensuring they are aligned with business value and understood by all. In fact ultimately if you can aim to measure progress using delivery of goals and value then advertisement is built in!

Do less to do more

When the team are actually really struggling to ship stuff, that needs a bit more investigation. One of the reasons I often find is that teams and even organisations are trying to do too many things concurrently. If the team can list 50 projects that they are working on, chances are they aren’t going to deliver any of them on time as they are spending most of their time multiplexing. Kanban tells us to limit the amount of work in play to maximise the output. This works at small scale for the team’s day to day work or at larger scale for organisational release planning. For me this is all about focus: start with one thing - the most important – complete it, then move onto the next. Only attempt to do more stuff in parallel when you have learned how to deliver successfully – even then be mindful not to fall back into the trap of trying to do too much…

When TDD bites back!

Dramatic title I know – this is really a post about lessons learned and trusting your TDD instincts (your TDD spidey sense!).

First of all I need to explain some history. In the distant past I was leading part of a programme for a customer with multiple teams building software iteratively in an agile manner. There were two teams building in our company - as ever we were all trying to do things to the best of our ability and knowledge. During one of the sprint reviews that were part of our development heartbeat I noted that one part of the system had been built in a, shall we say less than optimal way, with unit tests that were, er, fragile! In essence it was one big lump of code (pretty unreadable), with tests that had clearly been engineered after the fact – testing everything from the top level as large scale integration tests, very, very slowly. My TDD spidey senses were definitely tingling!

But. It was a relatively small bit of code, it worked to spec, and we weren’t exactly running perfectly to schedule. Despite knowing it impacted us all, as the large tests were slowing down our CI, I was persuaded that we take it on as technical debt - and funnily enough the debt was never paid back!

Now returning to the not so distant past - I returned to the customer to do some consultancy work and spoke to them about the code they had inherited. Largely they had got on really well with the code base that was handed over, with one obvious exception! Any guesses? Talking to them, they had needed to make changes, attempted to but struggled, describing the code as unreadable and unmaintainable – with the tests being more hindrance than help. Apparently after spending a couple of sprints trying to re-factor, they decided they needed to rip and replace!

As if I didn’t already have enough reasons to trust my TDD spidey sense…

So perhaps a better title would have been something like “When TDD is done badly it can seriously impact the maintainability of your code base” or “Just saying you do TDD doesn’t mean your code will be any good” – nowhere near as dramatic though!

NHibernate connect to Oracle from .NET without a client installation

Recently (for reasons too ridiculous to document) I needed to see if I could use the 11.2 version of the Oracle client from an NHibernate application without impacting an installation of 11.1 already on the machine. After a little bit of investigation (and honestly some trial and error) I discovered that the Oracle client can be delivered as a packaged part of a .NET application. In my case this allowed the client to use a later version than the one installed on the server, but it also allows the use of the Oracle client without having any client installed at all.

Firstly you need to deliver the necessary binaries to the bin directory, so from the Oracle downloads get the right version of the xcopy install binary for your platform. In this case I was using the 32 bit version ODAC112030Xcopy_32bit. The binaries required are:

oci.dll from instantclient_11_2
Oracle.DataAccess.dll from odp.net4\odp.net\bin\4
oraociei11.dll from instantclient_11_2
OraOps11w.dll from odp.net4\bin
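One simple way to get these into the bin directory is to add them to the project as content and let the build copy them – a fragment like this in the project file (paths assumed, with Oracle.DataAccess.dll added as a normal reference with Copy Local set):

<ItemGroup>
  <Content Include="oci.dll">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </Content>
  <Content Include="oraociei11.dll">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </Content>
  <Content Include="OraOps11w.dll">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </Content>
</ItemGroup>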

Without the standard TNSNAMES lookup process there are several alternative approaches to connection configuration:
 

Easy connect

The easy connect option allows you to supply just enough to connect to the instance; this is limited to server, port and instance, not allowing some of the extended features supported by local (TNS) naming.

<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <property name="dialect">NHibernate.Dialect.Oracle10gDialect</property>
    <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
    <property name="connection.connection_string">Data Source=oracle-server:1521/orcl;Persist Security Info=True;User ID=SCOTT;Password=TIGER</property>
    <property name="connection.driver_class">NHibernate.Driver.OracleDataClientDriver</property>
  </session-factory>
</hibernate-configuration>
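With the binaries in place and this configuration in the app config, a quick smoke test is just the usual NHibernate bootstrap – a sketch, with mappings and entities omitted:

var configuration = new NHibernate.Cfg.Configuration();
configuration.Configure(); // reads the hibernate-configuration section above

using (var sessionFactory = configuration.BuildSessionFactory())
using (var session = sessionFactory.OpenSession())
{
    // any trivial query proves the connection works without an installed client
    var one = session.CreateSQLQuery("select 1 from dual").UniqueResult();
}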

Embed the TNS entry in the connection string

You can embed the connection details from your TNSNAMES.ORA directly in the connection string. All you need to do is put the TNS entry directly in the data source on a single line.

<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <property name="dialect">NHibernate.Dialect.Oracle10gDialect</property>
    <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
    <property name="connection.connection_string">Data Source=(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = oracle-server)(PORT = 1521)))(CONNECT_DATA = (SERVICE_NAME = ORCL)));Persist Security Info=True;User ID=SCOTT;Password=TIGER</property>
    <property name="connection.driver_class">NHibernate.Driver.OracleDataClientDriver</property>
  </session-factory>
</hibernate-configuration>

This lets you use any additional TNS stuff (like load balancing), but is obviously pretty unwieldy from a deployment and maintenance perspective!

Local TNSNAMES.ORA

You can actually deploy a TNSNAMES.ORA file into the bin directory and have connection details picked up from here. So the configuration will just use a data source with a TNS name value (in this case test).

<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <property name="dialect">NHibernate.Dialect.Oracle10gDialect</property>
    <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
    <property name="connection.connection_string">Data Source=test;Persist Security Info=True;User ID=SCOTT;Password=TIGER</property>
    <property name="connection.driver_class">NHibernate.Driver.OracleDataClientDriver</property>
  </session-factory>
</hibernate-configuration>

which refers to a value in the TNSNAMES.ORA file delivered to the bin directory, containing:

TEST =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oracle-server)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = ORCL)
    )
  )

Enterprise Library log to console

I was recently writing an application that required some background processing to be handled by either a windows service or a console app. It was a multithreaded app, and I was using Enterprise Library to handle exceptions by policy (logging to the event log for monitoring). When I was testing the console app I realised it should really be writing the exceptions to the console as well as the event log – otherwise what’s the point of the console, right?

A quick bit of investigation (and I mean really quick) and I found out the Enterprise Library logging block allows you to use the ConsoleTraceListener from the System.Diagnostics namespace – so adding:

<add listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.SystemDiagnosticsTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=6.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
     type="System.Diagnostics.ConsoleTraceListener, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
     name="System Diagnostics Console Trace Listener" />

to the config of the console app, and configuring to use this listener as well was all that was needed.
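For reference, "configuring to use it" just means referencing the new listener from the relevant category source in the loggingConfiguration section – a fragment something like this (category and listener names assumed):

<categorySources>
  <add name="General" switchValue="All">
    <listeners>
      <add name="Event Log Listener" />
      <add name="System Diagnostics Console Trace Listener" />
    </listeners>
  </add>
</categorySources>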

Text search in MongoDB using C#

During a recent prototype development we found out we needed a decent search solution - normally this is right where we would turn to Lucene.Net. Lucene.Net is great, but does have some code overhead associated with it (managing indexes etc.), so the fact that we were already using MongoDB and they had just introduced a beta feature for text search (as of version 2.4) seemed too good to overlook!

Upgrading to the latest version of MongoDB was simple and totally issue free, and the instructions for enabling the feature and setting up the required indexes in http://docs.mongodb.org/manual/core/text-search/ were very clear. All that was required was to start the process with a parameter to enable the text search, and then create the indexes, which for my requirements was simple from the console:

db.Activity.ensureIndex(
                           {
                             Title: "text",
                             Description: "text",
                             AlsoKnownAs: "text",
                             Keywords: "text"
                           },
                           {
                             name: "ActivityFullTextIndex"
                           }
                         )

The above creates a text index for my collection (Activity) on the Title, Description, AlsoKnownAs and Keywords properties.

So after the indexes were created using the console, all that remained was actually using the search feature.

There is no direct implementation (yet) in the official C# driver so it requires calling the command directly. Reading the unit tests shows just how easy this is. The search operation in all its glory (not that much glory to be fair):

public IEnumerable<T> Search<T>(string search) where T : class, new()
{
    var textSearchCommand = new CommandDocument
        {
            { "text", typeof(T).Name },
            { "search", search }
        };
    var commandResult = _database.RunCommandAs<TextSearchCommandResult<T>>(textSearchCommand);

    return commandResult.Ok ? commandResult.Results.OrderBy(t => t.score).Select(t => t.obj) : null;
}

The command document is created to search the collection identified by the supplied generic type T and is supplied a search term. Using _database (an instance of MongoDatabase) we run the command using RunCommandAs returning the results in TextSearchCommandResult (coming soon). In this prototype code if the command result is Ok we return the result objects – ordered by score – obviously this is passed on to render the search result.
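For completeness, _database is just a MongoDatabase obtained in the usual way with the 1.x driver – connection details here are assumed:

var client = new MongoClient("mongodb://localhost:27017");
var server = client.GetServer();
MongoDatabase database = server.GetDatabase("prototype"); // held by the class as _database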

So you are now thinking that TextSearchCommandResult must be really complicated, ‘cos the search bit was a doddle right:

public class TextSearchCommandResult<T> : CommandResult
{
    public IEnumerable<TextSearchResult<T>> Results
    {
        get
        {
            var results = this.Response["results"].AsBsonArray.Select(row => row.AsBsonDocument);
            var resultObjects = results.Select(item => item.AsBsonDocument);

            return resultObjects.Select(row => BsonSerializer.Deserialize<TextSearchResult<T>>(row));
        }
    }
}

public class TextSearchResult<T>
{
    public T obj { get; set; }
    public double score { get; set; }
}

Wrong. Using the CommandResult base class the heavy lifting is done. All that is done here is to deserialize the identified objects into a simple TextSearchResult wrapper (simply to include the score for ordering – there is more info returned, but this prototype only needed the score).
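Using it is then as simple as you would hope – for example against the Activity collection indexed earlier (the Activity class and the object hosting the Search method are assumed here):

public class Activity
{
    public ObjectId Id { get; set; } // maps _id so deserialization is happy
    public string Title { get; set; }
    public string Description { get; set; }
    public string AlsoKnownAs { get; set; }
    public string Keywords { get; set; }
}

// somewhere in the prototype
IEnumerable<Activity> results = repository.Search<Activity>("climbing");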

Pretty quick to get text search up and running. Clearly this is still in beta, and doesn’t have the depth of the Lucene.Net implementation yet. Definitely one to keep an eye on though.

Adoption of test driven development

When promoting TDD to teams of developers or even individuals you often get arguments against adoption – I put this down largely to resistance to change – after all users hate change!

In this sort of situation you can adopt many approaches, from the extremes of yelling “just do it” if you are their manager to “just walking away” and letting them carry on doing what they are used to if it’s not your team. I am not proud to say that over the years I have tried both extreme approaches, surprisingly neither of which were terribly effective, and have also tried various flavours of the bits in the middle. The most success in achieving adoption has been when the benefits are made (or become) clear to the developer; I have found that once one team member groks it, more often than not they become an evangelist, and before long the whole team see the benefits.

A LOT of the arguments against can be boiled down to “writing tests as well will take me longer to develop” and this is my favourite to dispel! In these cases I like to walk through an example, more often than not a simple conversation will suffice, but on particularly stubborn examples a pair style code walk through can work really well.

I think the key is getting the example context understandable to the team or developer you are working with, for example pick a change they have recently made rather than some ridiculous canned example. As we are talking through I like to try to get the developer to turn his analytical mind on his own development process - it’s something I think as a profession we need to continually do, analyse the “how” you are doing stuff as well as the “what” you are doing.

The true light-bulb moments, when the developer realises that he can do this so much more effectively, often come when a user interface is involved. Help the developer see that his process is: write some new code to make a change, compile, run (possibly through the UI) to the point where his code might get executed, step through in a debugger, rinse and repeat ad nauseam. Then make him see that the “run through the UI to the point where his code might get executed” step has to be repeated many times and actually takes a significant amount of time – even if it is just two button clicks! Hopefully, before you have to say it, they will already see that they could optimise that step out!

One of the other things to remember is patience or “baby steps”, don’t push too hard too fast. If they only realise they can be more efficient during development just by writing tests to execute their code, let them practice this – highlight other areas for improvement and benefits they could expect, but remember that if you get too preachy with the gospel according to TDD then there is the definite possibility of finding yourself back at square 1 with resistance to change.

Export to Excel xlsx from ASP.NET MVC

First some background. During a recent project we had built an ASP.NET MVC 3 application that allowed users to display lists of data, filtering by search criteria. It was all pretty standard stuff: controller actions taking search parameters, requesting data from a repository and passing this data as model content in a ViewResult for display. We had a fair number of these actions defined when the customer requested the capability to download the result lists in an Excel format for offline analysis. So we wanted to come up with a solution that re-used the existing actions, with minimal impact.

Firstly we built an ActionResult that would return the model data in an xlsx format. This was actually easier than I expected thanks to the Open XML SDK. The solution was really cheap:

public class DownloadViewAsExcelResult : PartialViewResult
{
    public DownloadViewAsExcelResult(string viewName, object model)
    {
        base.ViewName = viewName;
        base.ViewData.Model = model;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        StringBuilder builder = new StringBuilder();
        StringWriter writer = new StringWriter(builder);

        ViewEngineResult result = null;
        if (View == null)
        {
            result = FindView(context);
            View = result.View;
        }

        ViewContext viewContext = new ViewContext(context, View, ViewData, TempData, writer);
        View.Render(viewContext, writer);

        XDocument format = XDocument.Load(new StringReader(builder.ToString()));
        Stream xlsxStream = new SpreadsheetBuilder().FromFormatXml(format);

        WriteFile(context.HttpContext, xlsxStream);

        if (result != null)
            result.ViewEngine.ReleaseView(context, View);
    }

    private static void WriteFile(HttpContextBase context, Stream content)
    {
        context.Response.Clear();
        context.Response.AddHeader("content-disposition", "attachment;filename=download.xlsx");
        context.Response.Charset = "";
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
        content.CopyTo(context.Response.OutputStream);
        context.Response.End();
    }
}

It extends the PartialViewResult to allow location of a named partial view; this view renders the model as an xml document structured into a known format so that it can be easily built into a spreadsheet and returned to the response. The re-use of the view engine and Razor was pragmatic – it seemed overkill to add anything else! The format used in this case was:

<book>
  <sheet name="mandatory sheet tab name" header="optional header and footer text">
    <row>
      <cell>@Html.DisplayFor(m => m.Property)</cell>
    </row>
  </sheet>
</book>
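As a concrete (made up) example, an export view for a list of results dropped into the Export sub folder might look like:

@model IEnumerable<SearchResultViewModel>
<book>
  <sheet name="Results" header="Search results">
    @foreach (var item in Model)
    {
      <row>
        <cell>@Html.DisplayFor(m => item.Name)</cell>
        <cell>@Html.DisplayFor(m => item.CreatedOn)</cell>
      </row>
    }
  </sheet>
</book>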

Really simple – a book element containing sheets, which in turn contain rows with cells of data. With this format the spreadsheet builder just reads the xml, writing the output to an Open Xml SDK SpreadsheetDocument like so:

public class SpreadsheetBuilder
{
    public Stream FromFormatXml(XDocument format)
    {
        MemoryStream stream = new MemoryStream();
        using (SpreadsheetDocument document = SpreadsheetDocument.Create(stream, SpreadsheetDocumentType.Workbook))
        {
            WorkbookPart workbookpart = document.AddWorkbookPart();
            workbookpart.Workbook = new Workbook();
            document.WorkbookPart.Workbook.AppendChild<Sheets>(new Sheets());

            var sheets = from element in format.Elements("book").Elements("sheet") select element;
            foreach (var element in sheets)
            {
                AddWorksheet(document, element);
            }
        }
        stream.Position = 0;

        return stream;
    }

    private void AddWorksheet(SpreadsheetDocument document, XElement sheetFormat)
    {
        SheetData sheetData = BuildSheetData(sheetFormat);

        WorksheetPart worksheetPart = document.WorkbookPart.AddNewPart<WorksheetPart>();

        worksheetPart.Worksheet = new Worksheet(sheetData);

        XAttribute headerAttribute = sheetFormat.Attribute("header");
        if (headerAttribute != null)
            worksheetPart.Worksheet.AppendChild<HeaderFooter>(CreateHeaderFooter(headerAttribute.Value));

        Sheets sheets = document.WorkbookPart.Workbook.Descendants<Sheets>().First();

        XAttribute nameAttribute = sheetFormat.Attribute("name");
        Sheet sheet = new Sheet()
        {
            SheetId = (UInt32)(sheets.Count() + 1),
            Id = document.WorkbookPart.GetIdOfPart(worksheetPart),
            Name = (nameAttribute == null) ? "Sheet " + (sheets.Count() + 1) : nameAttribute.Value
        };
        sheets.AppendChild(sheet);
    }

    private HeaderFooter CreateHeaderFooter(string message)
    {
        HeaderFooter header = new HeaderFooter();
        OddHeader oddHeader = new OddHeader();
        oddHeader.Text = "&C" + message;
        OddFooter oddFooter = new OddFooter();
        oddFooter.Text = "&C" + message;

        header.AppendChild<OddHeader>(oddHeader);
        header.AppendChild<OddFooter>(oddFooter);

        return header;
    }

    private SheetData BuildSheetData(XElement sheetFormat)
    {
        SheetData sheetData = new SheetData();

        int rowIndex = 0;
        var rows = from element in sheetFormat.Elements("row") select element;
        foreach (var rowElement in rows)
        {
            rowIndex++;
            Row row = new Row() { RowIndex = (UInt32)rowIndex };
            var cells = from element in rowElement.Elements("cell") select element;
            foreach (var cellElement in cells)
            {
                Cell c = new Cell { DataType = CellValues.InlineString };
                InlineString inlineString = new InlineString();
                Text t = new Text { Text = cellElement.Value };
                inlineString.AppendChild(t);
                c.AppendChild(inlineString);

                row.AppendChild(c);
            }

            sheetData.AppendChild(row);
        }

        return sheetData;
    }
}

Now all that was needed was a mechanism to use this result – for this we chose to add an ActionFilterAttribute to each action supporting excel download. This attribute just checks for the existence of a format value equal to “excel” and, when found, replaces the result with an instance of our DownloadViewAsExcelResult, with the view name changed to read from the Export sub folder in Views.

public class ExcelViewDownloadAttribute : ActionFilterAttribute
{
    public string ExportViewName { get; set; }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        base.OnActionExecuted(filterContext);

        object model = filterContext.Controller.ViewData.Model;
        if (model == null)
            return;

        ValueProviderResult value = filterContext.Controller.ValueProvider.GetValue("format");
        if (value != null && value.AttemptedValue.Equals("excel", StringComparison.InvariantCultureIgnoreCase))
        {
            var exportView = GetExportViewName(filterContext.ActionDescriptor.ActionName);
            DownloadViewAsExcelResult result = new DownloadViewAsExcelResult(exportView, model);
            filterContext.Result = result;
        }
    }

    private string GetExportViewName(string actionName)
    {
        if (string.IsNullOrEmpty(ExportViewName))
            ExportViewName = actionName;

        return "Export/" + ExportViewName;
    }
}

This attribute is then added (along with export view) to all actions requiring excel download support:

[ExcelViewDownload(ExportViewName = "Index")]
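So a typical search action ends up looking something like this (controller, repository and model names are made up) – a normal request renders the usual view, while adding ?format=excel to the same URL downloads the results as a spreadsheet:

[ExcelViewDownload(ExportViewName = "Index")]
public ActionResult Index(string searchTerm)
{
    var results = _repository.Find(searchTerm); // hypothetical repository call
    return View(results);
}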

Setting cursor in Bing Maps AJAX control (v7.0)

I have been massively sporadic with blogging recently – no excuses – I just have…

I have been using v7.0 of the Bing Maps AJAX control on a project and wanted to set the cursor on mouse hover of a Pushpin. The idea was that, as we had changed the default marker to a different size and had a click event to show details, we needed some way to give the user feedback that the pin was clickable. Pretty standard stuff. Cool thing is that this version made this a total doddle… Short post then!

Looking at the rendered source, the ‘map’ element has a MicrosoftMap class and sets the cursor as a style on this element:

<div class="MicrosoftMap" style="z-index: 0; overflow-x: hidden; overflow-y: hidden; 
background-color: rgb(255, 245, 242); position: absolute; left: 0px; top: 0px; right: 0px; bottom: 0px;
cursor: url(http://ecn.dev.virtualearth.net/mapcontrol/v7.0/cursors/grab.cur), move; "
>

so adding the two handlers

Microsoft.Maps.Events.addHandler(pin, 'mouseover', pinMouseHover);
Microsoft.Maps.Events.addHandler(pin, 'mouseout', pinMouseHover);

then the handling code, including a simple bit of jQuery to select by the class

function pinMouseHover(e) {
    var cursor = (e.eventName === 'mouseover') ? 'pointer'
        : 'url(http://ecn.dev.virtualearth.net/mapcontrol/v7.0/cursors/grab.cur), move';
    setMapCursor(cursor);
}

function setMapCursor(cursor) {
    $('.MicrosoftMap').css('cursor', cursor);
}

and it’s sorted. Doddle – told you…