Angular binding boolean (k)nots!

It seems like a trivial task: in an Angular app, change a checkbox bound to a Deleted ‘soft delete’ property so that it displays as an inverted Active property.

So the naive approach is just to negate the boolean value inside the two way model binding – something like:

<input name="IsDeleted" type="checkbox" [(ngModel)]="!input.IsDeleted">

Frighteningly this very nearly works! But you will find the behaviour of the model set is not correct, requiring two clicks on the checkbox – to be fair the docs do say to bind only to a data-bound property. So what do you do?

To stick with the two way data-binding syntax you could add getter/setter accessors on the component itself:

get isActive() { return !this.input.IsDeleted; }
set isActive(newValue: boolean) { this.input.IsDeleted = !newValue; }

then use it as the target of a simple two way data-bind expression:

<input name="IsDeleted" type="checkbox" [(ngModel)]="isActive">

Personally I think a more elegant approach is to use one way binding to display the inverted value, and the event syntax (for the checkbox, the change event) to set the inverted value:

<input name="IsDeleted" type="checkbox" [ngModel]="!input.IsDeleted" (change)="input.IsDeleted = !$event.target.checked">

Subtle Angular compile issue

Stumbled across a subtle issue with an Angular 4 build today that led to some investigation – so capturing here as a reminder. The CI build was failing, but the local build was working without issue – the complaint was the accessibility of a property on a class.

… .component.html (26,71): Property 'demoProperty' is private and only accessible within class 'DemoComponent'.

After investigation, the difference between the local build and the CI server causing this mismatch was the addition of the --prod flag. We had applied this flag as it enables AOT compilation, which (along with other benefits) improves the performance of the deployed app. Reading through the AOT docs, this ahead-of-time compilation may fail for several documented reasons even when the JIT build succeeds; the one tripping us up here is that all data-bound members must be public – so my default desire to keep the accessibility of a property as low as possible was the cause!
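The fix itself is trivial – widen the accessibility of anything the template binds to (a sketch reusing the names from the error message above):

```typescript
// Under JIT, template expressions are evaluated with access to the class,
// so a private member happens to work; AOT generates the template code
// *outside* the class, so every data-bound member must be public.
class DemoComponent {
  // private demoProperty = "bound value";  // builds with JIT, fails with --aot
  public demoProperty = "bound value";      // builds with both
}

console.log(new DemoComponent().demoProperty);
```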

So now when running the build locally we always apply either the --prod flag or the --aot flag - you can also use the --aot flag on ng serve.

Properties in log4net config

Another little log4net gem! You are probably aware of using properties in conversion patterns with the log4net PatternLayout, but did you know you can also use them in configuration? Well I didn’t…

My goal was to push a rolling log file path into the config file, so that we could avoid having to maintain multiple config files across services. So choosing the global context for properties (there are numerous contexts), I just added the file path before calling Configure, in my case using the XmlConfigurator:

log4net.GlobalContext.Properties["LogFilePath"] = logFilePath;

In config I can reference this named property using the conversion pattern syntax in the file value:

...
<appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
  <file type="log4net.Util.PatternString" value="%property{LogFilePath}"/>
...

The key part to note is the type “log4net.Util.PatternString” on the file element, which allows the conversion syntax to be interpolated – pretty sweet.

Use a custom log4net PatternConverter via config

By now we should all know how configurable and extensible log4net is – we have been using it for years after all. Recently though I struggled to find out how to use a custom PatternConverter in a pattern layout using configuration.

So with a simple PatternConverter such as:

public class TestPatternConverter : PatternConverter
{
    protected override void Convert(TextWriter writer, object state)
    {
        writer.Write("test");
    }
}

Which, you can blatantly see, does not do much other than write out “test” – but this is just to demonstrate the concept. The documentation describes adding the converter in code using the AddConverter operation – but there is no mention of how to do that in config. It turns out to be straightforward:

...
<layout type="log4net.Layout.PatternLayout">
  <conversionPattern value="%writeTest %message" />
  <converter>
    <name value="writeTest" />
    <type value="Demo.Logging.TestPatternConverter, Demo.Logging" />
  </converter>
</layout>
...

Pretty straightforward really – within the PatternLayout add a converter element, naming it and giving the qualified type name. You can then reference the named converter just as you would any other pattern in your layout. So helpfully here we would get “test” written before the log message! Obviously it is possible to imagine more useful scenarios…

Query App Insights customEvent custom dimensions

On a recent project using MassTransit to produce an event-based data exchange system, I thought it would be really sensible for tracing to add the serialized message to App Insights custom events – it turned out to be really helpful, making tracing so much easier.

I already had a MassTransit IConsumeObserver to log any exceptions, so adding the consumed message with its content was relatively simple. Ultimately the TelemetryClient TrackEvent operation accepts an IDictionary<string, string> to record custom dimensions, so all that was required was a dimensions builder:

public interface IBuildCustomDimensions
{
    Dictionary<string, string> Build<T>(T message);
}

public class CustomDimensionsBuilder : IBuildCustomDimensions
{
    public Dictionary<string, string> Build<T>(T message)
    {
        return CreateDefaultMessageProperties(message);
    }

    private Dictionary<string, string> CreateDefaultMessageProperties<T>(T message)
    {
        if (!(message is ISessionMessage)) return null;

        var session = message as ISessionMessage;
        return new Dictionary<string, string>()
        {
            { "SessionId", session.SessionId.ToString() },
            { "Message", JsonConvert.SerializeObject(message) }
        };
    }
}

Pretty simple: our own session id as one dimension for easy querying, along with the JSON-serialized message as another.
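The builder above is C#; as a rough, self-contained sketch of the same shape (TypeScript here, with `JSON.stringify` standing in for `JsonConvert.SerializeObject` and the function name being mine):

```typescript
// Sketch of the dimensions builder: only messages carrying a session id
// get dimensions; everything else returns null, as in the C# version.
interface SessionMessage {
  SessionId: string;
}

function buildCustomDimensions(message: object): Record<string, string> | null {
  if (!("SessionId" in message)) return null;
  const session = message as SessionMessage;
  return {
    SessionId: session.SessionId,
    // JSON.stringify plays the role of JsonConvert.SerializeObject here
    Message: JSON.stringify(message),
  };
}

// The result is the string-to-string dictionary shape that
// TelemetryClient.TrackEvent accepts for custom dimensions.
const dims = buildCustomDimensions({ SessionId: "session-1", Value1: 123 });
console.log(dims);
```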

When logged the custom dimensions result was displaying as expected (ignoring the complete lack of imagination in the made up message of course):

{
    "SessionId": "4bd715d5-bfbc-47c2-a10a-2738f5795627",
    "Message": "{\"Value1\":123, \"Value2\":456}"
}

The beauty of this, along with our session identifier, was that we could easily trace all messages through for a change. But the power of App Insights querying also meant we could read and query on the message content – just remember to use “tostring” on the serialized message before parsing – like so:

customEvents
| sort by timestamp desc
| extend Value1 = parsejson(tostring(customDimensions.Message)).Value1
| limit 5

Azure PowerShell setup

On a recent project we had automated the creation of environments on Azure using ARM templates, and had wrapped with a few quite basic PowerShell scripts for use in development/testing and within continuous integration/delivery. Some engineers were reporting issues with execution of the scripts – issues such as syntax errors that were pointing to version issues. It turned out that it was actually quite easy for an engineer who randomly installs stuff to play with (and yes I do mean me) to have multiple versions installed and competing!

Firstly, to get a view of the current state within PowerShell you can use:

Get-Module -ListAvailable Azure

This should output something like:

Directory: C:\Program Files\WindowsPowerShell\Modules

ModuleType Version Name  ExportedCommands
---------- ------- ----  ----------------
Script     3.1.0   Azure {Get-AzureAutomationCertificate, Get-AzureAutomationConnec...

Azure PowerShell uses semantic versioning, and anything less than 2.1.0 is not designed to run side by side. If you find yourself with a version below 2.1.0, uninstall the "Microsoft Azure PowerShell" feature using "Programs and Features".


Then, to install the latest version, the recommended method is to use the PowerShell Gallery; you can find the latest version using:

Find-Module AzureRM

Then install using:

Install-Module AzureRM -AllowClobber

Then you can identify the version of everything Azure installed using:

Get-Module -ListAvailable Azure*

App Insights querying counts

I have been using (and loving) App Insights a lot recently, and one of the things that has really impressed me is the capability and power of the queries when analysing usage patterns. One thing that caught me out, however, was counting the number of requests when sampling was active – in my case when the site was getting a lot of traffic during load testing.

I created a simple chart showing the number of requests per minute over the last hour using:

requests
| where timestamp > ago(1h)
| summarize count() by bin(timestamp, 1m)
| render timechart

This was showing far fewer requests than anticipated after my load tests?


Turns out (if you actually read the docs) this is directly called out: when sampling is in operation, each retained telemetry item carries an itemCount property recording how many original events it represents – so count() counts stored rows, not requests.

So remember to use the sum(itemCount) approach:

requests
| where timestamp > ago(1h)
| summarize sum(itemCount) by bin(timestamp, 1m)
| render timechart


Fairly significant difference!
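The difference is easy to reproduce outside Kusto (a plain-TypeScript illustration – `sampledRequests` is invented data, not real telemetry): with sampling on, counting stored rows undercounts, while summing itemCount recovers the true total.

```typescript
// With sampling at 25%, one stored row represents four original requests.
const sampledRequests = [
  { name: "GET /home", itemCount: 4 },
  { name: "GET /api/data", itemCount: 4 },
  { name: "GET /home", itemCount: 4 },
];

// summarize count() - counts stored rows only
const rowCount = sampledRequests.length;

// summarize sum(itemCount) - the number of requests actually served
const actualCount = sampledRequests.reduce((total, r) => total + r.itemCount, 0);

console.log(rowCount, actualCount); // 3 12
```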

Treat solution architecture like code

We have all seen code bases that are hard to read and follow: long methods, repeated code, loads of parameters, and all the other smells that keep us awake at night (or is that just me?)!

As your system gets larger and more complex, a non-existent or hard-to-read architectural view of it can be as bad for maintainability as a hard-to-read code base. The larger your system gets (and the more people you have working on it), the more you are likely to improve maintainability by putting attention into abstract architectural views. These views give a point of reference, allowing the team to see what they are building into and easily assess the impact of change – and, perhaps more importantly, they support the architectural refactoring that is so often neglected.

What form this view takes (beyond the obvious boxes and lines) is very dependent upon the nature of the system. For example, with very large systems with complex interactions, I find views showing the dynamic behaviour of the 'deliverables' (the independently versioned components) give a high enough level of abstraction – enough to see the dependencies across important scenarios. Often I use UML collaboration or communication diagrams, as their meaning is easy to explain, but anything the team can understand will do.

With smaller systems you may need to go below the deliverable to see interactions between sub-components. Regardless of the approach, I find the best way is, just like your code, to iterate on it. Ensure it is valuable to the team and proving useful; reflect often, and if you find questions are not being answered by the abstract views, adjust them.

Quieten down noisy HTTP headers

This is not a new topic, and this article blatantly steals from two other extremely helpful sources. The reason for writing it is to update the references slightly, as I found a few changes during a recent ASP.NET MVC deployment to an Azure App Service.

Starting with the simplest first the X-AspNetMvc-Version header can be removed by adding a line to Global.asax.

MvcHandler.DisableMvcResponseHeader = true;

This is pretty clear and obviously specific to ASP.NET MVC, but the header can also be removed via custom header changes in web.config – which is the approach required to remove the X-Powered-By header.

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <remove name="X-Powered-By"/>
      <!-- can also remove the MVC version header using this approach -->
      <!-- <remove name="X-AspNetMvc-Version"/> -->
    </customHeaders>
  </httpProtocol>
</system.webServer>

The X-AspNet-Version header can also be removed by amending config: just set the enableVersionHeader attribute on the system.web httpRuntime element to false.

<system.web>
  <httpRuntime enableVersionHeader="false" />
</system.web>

Removing the Server header in Azure App Services is apparently very easy now. The long story is that IIS 10 added a config attribute to allow removal of the Server header. The version of IIS reported by Azure at the time of publishing is 8; however, the removeServerHeader attribute does actually work. Sources indicate that Azure is running a custom version of IIS that is not in line with any OS release.

So cutting to the chase to remove the Server header in Azure App Services you just need to amend web.config to set the removeServerHeader attribute.

<system.webServer>
  <security>
    <requestFiltering removeServerHeader="true">
    </requestFiltering>
  </security>
</system.webServer>

The existing approaches still work of course; the most often used was to blank out the Server header using a rewrite rule.

<system.webServer>
  <rewrite>
    <outboundRules>
      <rule name="Blank Server header">
        <match serverVariable="RESPONSE_Server" pattern=".+" />
        <action type="Rewrite" value="" />
      </rule>
    </outboundRules>
  </rewrite>
</system.webServer>

Problems that can happen - a tendency to spike - or 'to spike or not to spike'

The more businesses and development organisations I work with, the more I see common problem “themes”. In this series of articles I try to highlight these problems and show potential solutions to some of the common symptoms.

I occasionally see a tendency in teams to attempt to spike everything that is unknown, to get a completely clear view for estimation or build. In the extreme this leads to a situation where every feature is spiked to some degree before being built – clearly wasteful.

Usually this behaviour is found in teams where, for some reason, trust is low or there is a fear of failure, so the tendency to spike is a protection mechanism to ensure the team know they can deliver. More often than not in these situations the result of the spike could easily have been committed to and delivered anyway; often the estimate then becomes so low as to be trivial (because the code is already in the bag), just with inevitable wasted time.

So the aim is to ensure the team feel able to work with some unknowns – that they are willing to take some risk into the build, and aren't aiming to know everything before they commit to building!

So ask if the spike is really necessary - what is the risk that means we don't think we can commit? If the risk is simply that "we can't be 100% sure we can deliver" or "we have never done that before" then you don't need a spike. If the risk is that "the world may implode" or "it may cause irreparable damage to our fragile ecosystem ultimately leading to the extinction of the human race" - then you may want to consider if a spike will actually help reduce the risk.