App Insights querying counts

I have been using (and loving) App Insights a lot recently and one of the things that has really impressed me is the capability and power of the queries when analysing usage patterns. One thing that caught me out, however, was counting the number of requests when sampling was active – in my case when the site was getting a lot of traffic during load testing.

I created a simple chart showing the number of requests per minute over the last hour using:

requests
| where timestamp > ago(1h)
| summarize count() by bin(timestamp, 1m)
| render timechart

It was showing far fewer requests than anticipated after my load tests.

[Image: AI_Count – requests per minute using count()]

Turns out (if you actually read the docs) this is directly called out:

https://docs.microsoft.com/en-us/azure/application-insights/app-insights-analytics-reference#count

So remember to use the sum(itemCount) approach:

requests
| where timestamp > ago(1h)
| summarize sum(itemCount) by bin(timestamp, 1m)
| render timechart

[Image: AI_SumItemCount – requests per minute using sum(itemCount)]

Fairly significant difference!


Treat solution architecture like code

We have all seen code bases that are hard to read and follow: long methods, repeated code, loads of parameters and all the other smells that keep us awake at night (or is that just me?)!

As your system grows larger and more complex, a non-existent or difficult-to-read architectural view of the system can be as bad for maintainability as a difficult-to-read code base. The larger your system gets (and the more people you have working on it), the more likely you are to improve maintainability by putting attention into abstract architectural views. These views give a point of reference allowing the team to see what they are building into and easily assess the impact of change - and, perhaps more importantly, they support the architectural re-factoring that is so often neglected.

What form this view takes (beyond the obvious boxes and lines) is very dependent upon the nature of the system. For example, with very large systems with complex interactions I find that views showing the dynamic behaviour of the 'deliverables' (the independently versioned components) give a high enough level of abstraction - enough to see the dependencies across important scenarios. I often use UML collaboration or communication diagrams, as their meaning is easy to explain, but anything the team can understand will do.

With smaller systems you may need to go below the deliverable to see interactions between sub-components. Regardless of the approach, I find the best way is, just like your code, to iterate it. Ensure it is valuable to the team and is proving useful. Reflect often, and if you find questions are not being answered by the abstract views, adjust them.

Quieten down noisy HTTP headers

This is not new material, and this article blatantly steals from two other extremely helpful sources on the topic, namely

https://www.troyhunt.com/shhh-dont-let-your-response-headers/ and

https://www.saotn.org/remove-iis-server-version-http-response-header/. The reason for this article is to update the references slightly as I found a few changes during a recent ASP.NET MVC deployment to an Azure App Service.

Starting with the simplest first: the X-AspNetMvc-Version header can be removed by adding a single line to Global.asax.

MvcHandler.DisableMvcResponseHeader = true;
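For context, this line typically lives in Application_Start in Global.asax.cs - a minimal sketch below (your application will have its own route/filter/bundle registrations alongside it):

using System.Web.Mvc;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Stops ASP.NET MVC adding the X-AspNetMvc-Version header to every response.
        MvcHandler.DisableMvcResponseHeader = true;

        // ...the usual RouteConfig/FilterConfig/BundleConfig registrations go here...
    }
}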

This is pretty clear and is obviously specific to ASP.NET MVC, but it can also be removed using changes to web.config to remove custom headers – this is the approach required to remove the X-Powered-By header.

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <remove name="X-Powered-By"/>
      <!-- can also remove the MVC version header using this approach -->
      <!-- <remove name="X-AspNetMvc-Version"/> -->
    </customHeaders>
  </httpProtocol>
</system.webServer>

The X-AspNet-Version header can also be removed by amending config - just set the enableVersionHeader attribute on system.web/httpRuntime to false.

<system.web>
  <httpRuntime enableVersionHeader="false" />
</system.web>

Removing the Server header in Azure App Services is apparently very easy now. The long story is that IIS 10 added a config attribute to allow removal of the Server header. The version of IIS reported by Azure at the time of publishing is 8, yet the removeServerHeader attribute does actually work - sources indicate that Azure is running a custom version of IIS that is not in line with any OS version.

So, cutting to the chase: to remove the Server header in Azure App Services you just need to amend web.config to set the removeServerHeader attribute.

<system.webServer>
  <security>
    <requestFiltering removeServerHeader="true" />
  </security>
</system.webServer>

The existing approaches still work of course; the most commonly used one was to blank out the Server header using a rewrite rule.

<system.webServer>
  <rewrite>
    <outboundRules>
      <rule name="Blank Server header">
        <match serverVariable="RESPONSE_Server" pattern=".+" />
        <action type="Rewrite" value="" />
      </rule>
    </outboundRules>
  </rewrite>
</system.webServer>

Problems that can happen - a tendency to spike - or 'to spike or not to spike'

The more businesses and development organisations I work with, the more I see common problem “themes”. In this series of articles I am trying to highlight the problems and show the potential solutions to some of these common symptoms.

I occasionally see a tendency in teams to attempt to spike 'everything' that is unknown to get a completely clear view for estimation or build.  In the extreme this leads to a situation where every feature is spiked to some degree before being built. Clearly wasteful.

Usually this behaviour is found in teams where, for some reason, either trust is low or there is a fear of failure, so the tendency to spike is a protection mechanism to ensure that the team know they can deliver. More often than not in these situations the result of the spike could easily have been committed to and delivered; often the estimate then becomes so low as to be trivial (because the code is already in the bag), just with inevitable wasted time.

So the aim is to ensure the team feel willing to work with some unknowns - or are willing to take some risk into build. That they aren't aiming to know everything before they commit to building!

So ask if the spike is really necessary - what is the risk that means we don't think we can commit? If the risk is simply that "we can't be 100% sure we can deliver" or "we have never done that before" then you don't need a spike. If the risk is that "the world may implode" or "it may cause irreparable damage to our fragile ecosystem ultimately leading to the extinction of the human race" - then you may want to consider if a spike will actually help reduce the risk.

msdeploy - more than I thought

I have been using msdeploy for years to automate ASP.NET deployments. I always knew it was a clever/cool tool, but I really had no idea of its depth!

I had a bit of an issue very recently whereby I needed to get structure and data out of a SQL CE SDF database file (don't ask!!). I thought this was going to burn a morning - obviously my Google fu kicked in and I found a useful article on migrating between database formats. This pointed me to the msdeploy command line, and reading the docs to see what I could do I was surprised by the depth – just look at all the providers available!

I ended up running a simple command line to extract a full SQL Server DDL/DML script, allowing me to create a copy of the database wherever I needed it.

msdeploy -verb:sync -source:dbFullSql="Data Source={{ Path to SDF file! }}",sqlCe=true -dest:dbFullSql="{{ Path to output file}}"

Script generated and full SQL DB created in less than five minutes from opening the browser tab!

This is definitely a tool to have in your utility belt…

Problems that can happen - we have always done it that way!

The more businesses and development organisations I work with, the more I see common problem “themes”. In this series of articles I am trying to highlight the problems and show the potential solutions to some of these common symptoms.

When I visit companies to help them with software development I end up asking a lot of questions - often finding myself sounding like an annoying 5-year-old child just repeating the word "Why?" in response to every answer! When I get down to responses like "because we have always done it that way" or "that's just the way we do it" then I know the way things are being done needs to be reviewed. It's like a smell - if we haven't thought about something we are doing for so long that it has become second nature, how can we be sure it is still valuable or relevant?

In software development, when we are trying to improve something (or to generally be innovative), failures and mistakes are seen as a necessary part of the process of working out what's right. The path to the successful solution is comprised of a series of small mistakes and failures from which we learn, adapting the solution as we go. I often find that development teams neglect applying this practice to the way they build software. If we accept as fact that nobody is perfect, and that things can always be improved, then it's OK to make mistakes and it's OK to get things wrong. What's not OK is to repeat the same mistakes ad infinitum!

I like the metaphor of friction when looking at issues in the development process, friction holding us back and slowing down our delivery vehicle!

Friction can be felt anywhere from the initial idea right through to running in production, and friction may be felt in one part of the process but caused elsewhere. I think it is easiest to spot (and fixing it has the greatest gain) in the things we do regularly - like deployment. Teams I have worked with have "got used" to the process of creating builds and deploying taking days - it really shouldn't be that hard!

No matter how you are building software, this is why taking time to "inspect" what we do, so that we can make small adjustments toward continuous improvement, is so important. Take time to note what you do, how long it takes, what feels productive, what's frustrating and so on. Reflect on this as a team and honestly question and evaluate what you do and how. Don't be afraid to try small changes, measure the effect and adjust as necessary – removing friction makes you faster. Don't just do things "'cos that's what we do" - think and reflect – it *may* still be the right way, but there may be a better one. The world doesn't stand still, so nor should we!

Re-factoring unit tests to use automatic mocks

Working as a consultant with lots of different teams, helping to promote engineering practices, I often hear 'reasons' as to why teams and developers don't or can't write automated tests. Most developers and teams want to do things that will ultimately make their lives easier, but sometimes hurdles and issues seem too big to overcome, and so they stop trying! One conversation with a particular developer a while back went along the lines of "the tests slow me down". This intrigued me, as personally I find that TDD speeds me up - so I asked a few more questions and paired with him to see his approach. Long story short, I found that while building, this developer changed his mind - and his constructor-injected dependencies - a lot, and without auto mocking he was changing his mocks and construction code more than his actual code.

Everyone’s approach is different – that’s what makes the world so great right – so when I listened and understood the pain I was able to offer a solution. I think all too often we just say “do this” or “do that” without looking at the reasons people may be struggling…

I will try and demonstrate the change in approach to auto mocking that helped in this case using a massively fictitious (and equally tragic) example.

We have an article service that returns articles based on a string Identifier (presumably to display somewhere). The first test written using NUnit and Moq to read the article looks like so:

[Test]
public void returns_read_article()
{
    var repository = new Mock<IArticleRepository>();
    repository.Setup(r => r.Get(It.IsAny<string>())).Returns(new Article());

    var service = new ArticleService(repository.Object);

    var article = service.Get(Guid.NewGuid().ToString());
    Assert.IsNotNull(article);
}

And the simplest ‘production’ code looks like:

public class ArticleService
{
    private readonly IArticleRepository _repository;

    public ArticleService(IArticleRepository repository)
    {
        _repository = repository;
    }

    public Article Get(string articleId)
    {
        return _repository.Get(articleId);
    }
}

I told you it was a tragic example right!
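For completeness, the supporting types assumed by these snippets could be as simple as the following (hypothetical shapes, just enough to make the example hang together):

public class Article
{
    // Details omitted - the tests above only care that an instance is returned.
}

public interface IArticleRepository
{
    Article Get(string articleId);
}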

So the issue this developer had was that whenever he changed his mind and needed to introduce another dependency, all of his tests needed to be changed (OK, in this case it's one test, but if you have 20 tests that's a bit more work). Let's simulate the change with another tragic example – let's say we have been asked to change the system to record the popularity of articles, and we decide that the easiest way is to change our ArticleService to use another dependency to record each read of an article.
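The new dependency's contract (again a hypothetical shape, consistent with the code that follows) might be:

public interface IArticlePopularityManager
{
    void RecordRead(string articleId);
}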

Obviously as soon as we add another dependency the test code will need to be changed too. So what are our choices? 1. Give up. 2. Put up with it and re-factor all of the tests. 3. Try and find a way to focus our use of mocked dependencies. 1 and 2 have already been tried! So let's try 3!

How about we automatically create mocks for the service – like when we use a DI container, just injecting mocks that we can control. Most test/mocking frameworks already have this capability – and they aren't too tricky to build yourself – but as we are using Moq, how about we NuGet install and use AutoMoq.

[Test]
public void returns_read_article()
{
    AutoMoqer moqer = new AutoMoqer();
    moqer.GetMock<IArticleRepository>().Setup(r => r.Get(It.IsAny<string>())).Returns(new Article());

    var service = moqer.Create<ArticleService>();

    var article = service.Get(Guid.NewGuid().ToString());
    Assert.IsNotNull(article);
}

Simple changes – we new up an AutoMoqer, tell it to use a mock with the same Setup for our repository, then create our service using the Moqer – the tests run green. Now we can add our test for the new feature:

[Test]
public void records_article_read()
{
    AutoMoqer moqer = new AutoMoqer();
    moqer.GetMock<IArticlePopularityManager>().Setup(r => r.RecordRead(It.IsAny<string>())).Verifiable();

    var service = moqer.Create<ArticleService>();

    service.Get(Guid.NewGuid().ToString());
    moqer.GetMock<IArticlePopularityManager>().Verify();
}

This time we only deal with our new popularity manager (I know, tragic!) – obviously the test fails – so we change the implementation step by step until the tests are green and we end up with:

public class ArticleService
{
    private readonly IArticleRepository _repository;
    private readonly IArticlePopularityManager _articlePopularityManager;

    public ArticleService(IArticleRepository repository,
        IArticlePopularityManager articlePopularityManager)
    {
        _repository = repository;
        _articlePopularityManager = articlePopularityManager;
    }

    public Article Get(string articleId)
    {
        _articlePopularityManager.RecordRead(articleId);
        return _repository.Get(articleId);
    }
}

Our existing test (apart from our re-factor to auto mock) was untouched and still runs green – so we can now amend dependencies to our heart's content, only affecting tests relying on those dependencies. So our developer is happy that he can write tests without this issue slowing him down.

Really simple. Before you start thinking this is still a bit kek with all the duplication ‘n stuff – go and have a look at the AutoMoqTestFixture in the AutoMoq.Helpers namespace, then re-factor again!
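If you want to stop short of the helper for now, one way to cut the duplication using only the APIs already shown above is to new up the AutoMoqer in an NUnit SetUp method - a rough sketch:

[TestFixture]
public class ArticleServiceTests
{
    private AutoMoqer _moqer;

    [SetUp]
    public void CreateMoqer()
    {
        // A fresh AutoMoqer per test keeps the mocks isolated and removes the construction noise from each test.
        _moqer = new AutoMoqer();
    }

    [Test]
    public void returns_read_article()
    {
        _moqer.GetMock<IArticleRepository>().Setup(r => r.Get(It.IsAny<string>())).Returns(new Article());

        var article = _moqer.Create<ArticleService>().Get(Guid.NewGuid().ToString());

        Assert.IsNotNull(article);
    }
}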

Aspects of logging - love and hate

It's fair to say I have a love hate relationship with logging. I love it when logging helps identify the issue - when it's enough but not too much! But I hate it when logging code obfuscates the intent of the code. Go on, just think back to all those code bases you have seen that were littered with Log.Write statements that were clearly put in as the result of a very pressured debug session - not that you would have ever done this of course...

One of the simple things I like to do early on in projects is build in a capability to 'configurably' log method entry and exit within my code base. With parameter details this is more often than not enough for those times when your 'actual' logging isn't giving you enough to diagnose the issue. This allows me to essentially ignore logging at the start of development, as I can use this approach to help diagnose, and steer my 'actual' logging to make it targeted and useful rather than overly verbose and redundant!

I have blogged about this approach before (apparently quite a while ago) - but as there have been some significant breaking changes to Interception in Unity v3.5 and the Enterprise Library v6 logging application block, I thought a refresh was in order.

This approach is obviously possible and probably just as easy with other containers and logging tools - so there really is no excuse for random Log.Write statements everywhere!

So let's start from scratch - first we are going to add the NuGet package Unity.Interception, which will also bring in the Unity package. We create a container and register a simple test interface and implementation.

UnityContainer container = new UnityContainer();
container.RegisterType<ICommand, TestCommand>(); 
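For reference, the test interface and implementation are nothing special - something like this (a hypothetical stand-in, just enough to see the interception at work later):

public interface ICommand
{
    void Execute();
}

public class TestCommand : ICommand
{
    public void Execute()
    {
        // Real work would go here; the logging policy will log entry and exit around this call.
        Console.WriteLine("TestCommand executing");
    }
}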

We resolve that and execute our test command:

ICommand command = container.Resolve<ICommand>();
command.Execute(); 

Simple - so now we have a container we can look to add the configurable logging capability we want. We are going to use Enterprise Library for logging and also for the configurable policy - so add the EnterpriseLibrary.Logging and EnterpriseLibrary.PolicyInjection packages.

Enterprise Library logging needs to be configured; for flexibility I tend to use the Xml configuration approach using the config tool that comes with the full binary download, but it can also be done using a fluent interface. There is plenty of documentation on getting started with the logging application block. Because in this simple approach we are going to use the static LogWriter, we need to bootstrap logging after we have configured it by calling:

Logger.SetLogWriter(new LogWriterFactory().Create()); 
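If you would rather avoid Xml for the logging configuration, a minimal programmatic equivalent might look something like the sketch below (this assumes the Enterprise Library 6 programmatic configuration API - LoggingConfiguration, FlatFileTraceListener and the LogWriter constructor taking a configuration - so treat it as a starting point rather than gospel):

// Namespaces: Microsoft.Practices.EnterpriseLibrary.Logging,
// Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners, System.Diagnostics.
var config = new LoggingConfiguration();

// Send everything written to the "General" category to a flat file listener.
config.AddLogSource("General", SourceLevels.All, true, new FlatFileTraceListener("trace.log"));

// Bootstrap the static Logger from this configuration instead of the Xml-driven factory.
Logger.SetLogWriter(new LogWriter(config));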

Then we need to change the creation of the container to enable the Interception extension:

UnityContainer container = new UnityContainer();
container.AddNewExtension<Interception>();
container.LoadConfiguration(); 

Note that we are loading our external configuration with LoadConfiguration - at this point it will fail, as we haven't yet sorted the Unity config (be patient).

We also need to change our registrations to add our interception behaviour - we are just using a simple interface interceptor here as our code will be all interface driven!

container.RegisterType<ICommand, TestCommand>(
    new InterceptionBehavior<PolicyInjectionBehavior>(),
    new Interceptor<InterfaceInterceptor>());

Obviously this could make registrations a bit verbose - so you can push it into an extension method.
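For example, something like this (a hypothetical helper, simply wrapping the registration shown above):

public static class ContainerRegistrationExtensions
{
    // Registers a mapping with the policy injection behaviour and interface interceptor
    // already applied, so call sites stay one-liners.
    public static IUnityContainer RegisterIntercepted<TFrom, TTo>(this IUnityContainer container)
        where TTo : TFrom
    {
        return container.RegisterType<TFrom, TTo>(
            new InterceptionBehavior<PolicyInjectionBehavior>(),
            new Interceptor<InterfaceInterceptor>());
    }
}

// Usage: container.RegisterIntercepted<ICommand, TestCommand>();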

The configuration is the only fiddly bit. Unity has a fluent config API, so we could add:

container.Configure<Interception>()
    .AddPolicy("logging")
    .AddMatchingRule<AssemblyMatchingRule>(
        new InjectionConstructor(
            new InjectionParameter("Library")))
    .AddCallHandler<LogCallHandler>(
        new ContainerControlledLifetimeManager(),
        new InjectionConstructor(
            9001, true, true,
            "start", "finish", true, false, true, 10, 1));

after the container construction. This configuration adds a policy (called "logging") with an assembly matching rule that intercepts everything from assembly "Library" and uses the LogCallHandler (from the policy injection block) to log before and after (check out the constructor of LogCallHandler to see what all those bools and ints do!). Useful, but ideally we want to be able to drop the policy in without code change.

So, using Unity design-time configuration, we create the following Unity config section:

<unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
    <sectionExtension type="Microsoft.Practices.Unity.InterceptionExtension.Configuration.InterceptionConfigurationExtension, Microsoft.Practices.Unity.Interception.Configuration" /> 

    <container>
      <extension type="Interception" />
      <interception>
        <policy name="logging">
          <matchingRule name="library" type="AssemblyMatchingRule">
            <constructor>
              <param name="assemblyName" value="Library" />
            </constructor>
          </matchingRule>
          <callHandler name="LogHandler" type="Microsoft.Practices.EnterpriseLibrary.Logging.PolicyInjection.LogCallHandler,Microsoft.Practices.EnterpriseLibrary.PolicyInjection">
            <constructor>
              <param name="eventId" value="9001" />
              <param name="logBeforeCall" value="true" />
              <param name="logAfterCall" value="true" />
              <param name="beforeMessage" value="Before" />
              <param name="afterMessage" value="After" />
              <param name="includeParameters" value="true" />
              <param name="includeCallStack" value="true" />
              <param name="includeCallTime" value="true" />
              <param name="priority" value="3" />
            </constructor>
          </callHandler>
        </policy>
      </interception>
    </container>
  </unity> 

Don't forget to add the section definition into configSections:

    <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration"/> 

This is doing the same as our fluent configuration - now you can see the named call handler constructor arguments being passed to the LogCallHandler which is applied as part of the "logging" policy with an assembly matching rule. We can then amend the Enterprise Library logging config to allow capture of parameters.

Now we can simply change the config file to remove the logging policy totally, or introduce more specific matchingRules to target the area of the code base that we haven't logged properly yet!

Problems that can happen - they never deliver!

The more businesses and development organisations I work with, the more I see common problem “themes”. In this series of articles I am trying to highlight the problems and show the potential solutions to some of these common symptoms.

“The team can’t deliver anything!”

“They are always late!”

I usually hear this as a general perception from those outside of the development team or those not involved in the day to day delivery of software.

Perception is reality - so there is a problem. What can we do?

Clarify goals and vision

If the development team can list *loads* of stuff they have done, then it may be that the delivery focus is off – stuff is going out the door, but it's not what the business needs. Ensure the development goals for the deliveries are clear, valuable, understood and agreed by everyone, not just the team. Whilst the development team may care that we are "re-factoring to remove the anti-pattern generic repositories and their leaky abstraction" (or whatever) – you know that means nothing to the guy selling it! Make sure the goals and vision mean something.

Advertise

It may be that stuff is actually getting out the door, but no one knows. Obviously you need to make sure people know the team have solved this problem or added that feature. This becomes exponentially easier once you have clarified your goals and vision, ensuring they are aligned with business value and understood by all. In fact ultimately if you can aim to measure progress using delivery of goals and value then advertisement is built in!

Do less to do more

When the team are actually really struggling to ship, that needs a bit more investigation. One of the reasons I often find is that teams, and even organisations, are trying to do too many things concurrently. If the team can list 50 projects that they are working on, chances are they aren't going to deliver any of them on time, as they are spending most of their time multiplexing. Kanban tells us to limit the amount of work in play to maximise the output. This works at small scale for the team's day-to-day work or at larger scale for organisational release planning. For me this is all about focus: start with one thing - the most important – complete it, then move onto the next. Only attempt to do more in parallel when you have learned how to deliver successfully – and even then be mindful not to fall back into the trap of trying to do too much…

When TDD bites back!

Dramatic title I know – this is really a post about lessons learned and trusting your TDD instincts (your TDD spidey sense!).

First of all I need to explain some history. In the distant past I was leading part of a programme for a customer, with multiple teams building software iteratively in an agile manner. There were two teams building in our company - and, as ever, we were all trying to do things to the best of our ability and knowledge. During one of the sprint reviews that were part of our development heartbeat I noted that one part of the system had been built in a, shall we say, less than optimal way, with unit tests that were, er, fragile! In essence it was one big lump of code (pretty unreadable), with tests that had clearly been engineered after the fact – testing everything from the top level as large-scale integration tests, very, very slowly. My TDD spidey senses were definitely tingling!

But. It was a relatively small bit of code, it worked to spec, and we weren't exactly running perfectly to schedule. Despite knowing it had an impact on us all, as the large tests were slowing down our CI, I was persuaded that we take it on as technical debt - and funnily enough the debt was never paid back!

Now returning to the not so distant past - I returned to the customer to do some consultancy work and spoke to them about the code they had inherited. Largely they had got on really well with the code base that was handed over, with one obvious exception! Any guesses? Talking to them, they had needed to make changes and attempted to, but struggled - describing the code as unreadable and unmaintainable, with the tests being more of a hindrance than a help. Apparently, after spending a couple of sprints trying to re-factor, they decided they needed to rip and replace!

As if I didn’t already have enough reasons to trust my TDD spidey sense…

So perhaps a better title would have been something like "When TDD is done badly it can seriously impact the maintainability of your code base" or "Just saying you do TDD doesn't mean your code will be any good" – nowhere near as dramatic though!