Debug dynamic HTML in Chrome

In a recent Angular application Chrome helped me identify how to apply styles to a third-party component over which we had no control. A popup was being added to the HTML dynamically, and as it was removed on any action it wasn’t obvious how I could find the dynamic content to work out how to style it – enter Chrome dev tools!

With developer tools open (F12) and the dynamic element displayed, press F8 to break script execution and freeze the DOM – then press CTRL+SHIFT+C and use the arrow keys to navigate through the elements to identify the one you need.

Find Oracle priv’s GRANTed to a table

One I can’t seem to remember (obviously until I write it down here). When trying to find out the privileges that have been granted on a table in Oracle, use the DBA_TAB_PRIVS view:

SELECT * FROM DBA_TAB_PRIVS WHERE table_name = 'TABLE' AND owner = 'SCHEMA';

Oracle EF mapping NUMBER(p, s) data types

In a recent project we built a template-based code generator to help us rapidly build out OData ASP.NET Web API services over an existing Oracle database using Entity Framework. The first cut used a default mapping of decimal for each Oracle NUMBER(p, s) column found, and we started getting messages like the following:

Member Mapping specified is not valid. The type 'Edm.Decimal[Nullable=False,DefaultValue=,Precision=6,Scale=0]' of member 'Property' in type 'TypeName' is not compatible with 'OracleEFProvider.number[Nullable=False,DefaultValue=,Precision=6,Scale=0]' of member 'Column' in type 'CodeFirstDatabaseSchema.Table'.\r\n(469,12) : error 2019: Member Mapping specified is not valid. 

Clearly just using the basic mapping was not correct!

The Oracle data provider documentation describes how the NUMBER(p, 0) data types map to the various integer data types:

https://docs.oracle.com/database/121/ODPNT/entityEDMmapping.htm#ODPNT8275

So it was easy to work out how we could use the scale and precision values to correctly map the various NUMBER data types. What was pretty interesting to discover, however, is that you can change these mappings through config – the example given in the docs takes a NUMBER(1, 0) column direct to a bool data type. Useful.

https://docs.oracle.com/database/121/ODPNT/entityDataTypeMapping.htm#ODPNT8300
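As a sketch of the generator fix, the precision/scale check might look something like the following. TypeScript here purely for illustration – the function name and target type names are ours, and the thresholds follow the integer ranges in the Oracle doc table linked above, so verify them against the docs before relying on them:

```typescript
// Illustrative sketch only: map an Oracle NUMBER(p, s) column to a target
// type name for code generation, along the lines of the Oracle mapping table.
// (Function name and exact thresholds are ours, not Oracle's API.)
function mapOracleNumber(precision: number, scale: number): string {
  if (scale > 0) return "decimal";     // fractional values stay decimal
  if (precision <= 4) return "short";  // NUMBER(1..4, 0)
  if (precision <= 9) return "int";    // NUMBER(5..9, 0)
  if (precision <= 18) return "long";  // NUMBER(10..18, 0)
  return "decimal";                    // anything larger
}
```

With this in place a NUMBER(6, 0) column – like the one in the error message above – comes out as an int rather than a decimal.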

Angular binding boolean (k)nots!

It seems like a trivial task: in an Angular app, change a checkbox that manages a Deleted ‘soft delete’ property so it displays inverted as an Active property.

So the naive approach is just to change the two-way model binding to ! the boolean value – something like:

<input name="IsDeleted" type="checkbox" [(ngModel)]="!input.IsDeleted">

Frighteningly this very nearly works! But you will find the behaviour when setting the model is not correct, requiring two clicks on the checkbox – to be fair, the docs do say to bind only to a data-bound property. So what do you do?

To stick with the two way data-binding syntax you could add getter/setter accessors on the component itself:

get isActive() { return !this.input.IsDeleted; }
set isActive(newValue: boolean) { this.input.IsDeleted = !newValue; }

then use it as the target of a simple two-way data-bind expression:

<input name="IsDeleted" type="checkbox" [(ngModel)]="isActive">
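Put together, a minimal component sketch looks like the following – names such as `input.IsDeleted` follow the snippets above, and the class is stripped of the @Component decorator so it stands alone:

```typescript
// Minimal sketch of the component side of the getter/setter approach above.
// (No Angular decorator here so the class is self-contained.)
class SoftDeleteComponent {
  input = { IsDeleted: true };

  // what [(ngModel)]="isActive" reads...
  get isActive(): boolean { return !this.input.IsDeleted; }
  // ...and what it writes back
  set isActive(newValue: boolean) { this.input.IsDeleted = !newValue; }
}
```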

Personally I think a more elegant approach is to use one-way binding to display the inverted value and the event syntax (for the checkbox, the change event) to set the inverted value:

<input name="IsDeleted" type="checkbox" [ngModel]="!input.IsDeleted" (change)="input.IsDeleted=!$event.target.checked">
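The inversion logic behind that template can be sketched as two plain functions – the names here are illustrative, not Angular API:

```typescript
// Illustrative model shape for the snippet above.
interface Model { IsDeleted: boolean; }

// what [ngModel]="!input.IsDeleted" renders as the checked state
const displayedChecked = (input: Model): boolean => !input.IsDeleted;

// what (change)="input.IsDeleted = !$event.target.checked" writes back
const onCheckboxChange = (input: Model, checked: boolean): void => {
  input.IsDeleted = !checked;
};
```

Either way round, the inversion lives in exactly one place, which is what the naive `[(ngModel)]="!input.IsDeleted"` attempt failed to achieve.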

Subtle Angular compile issue

Stumbled across a subtle issue with an Angular 4 build today that led to some investigation – so capturing it here as a reminder. The CI build was failing but the local build was working without issue – the complaint was the accessibility of a property on a class.

… .component.html (26,71): Property 'demoProperty' is private and only accessible within class 'DemoComponent'.

After investigation, the difference between the local build and the CI server causing this mismatch was the addition of the --prod flag. We had applied this flag as it enables AOT compilation (https://angular.io/guide/aot-compiler) which (along with the other benefits described) improves performance of the deployed app. Reading through the AOT docs, this ahead-of-time compilation may fail where JIT compilation succeeds, for several documented reasons; the one tripping us up here is that all data-bound members must be public – so my default desire to keep the accessibility of a property as low as possible was the cause!

So now when running the build locally we always apply either the --prod flag or the --aot flag – you can also use the --aot flag with ng serve.

Properties in log4net config

Another little log4net gem! You are probably aware of the use of properties in conversion patterns with log4net’s PatternLayout, but did you know you can use them in configuration too? Well I didn’t…

My goal was to push a rolling log file path into the config file, so that we could avoid having to maintain multiple config files across services. Choosing the global context for properties (there are numerous contexts), I just added the file path before calling Configure – in my case using the XmlConfigurator:

log4net.GlobalContext.Properties["LogFilePath"] = logFilePath;

In config I can reference this named property using the conversion pattern syntax in the file value:

...
<appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
  <file type="log4net.Util.PatternString" value="%property{LogFilePath}"/>
...

The key part to note is the “log4net.Util.PatternString” type on the file element, which allows the conversion syntax to be interpolated – pretty sweet.

Use a custom log4net PatternConverter via config

By now we should all know how configurable and extensible log4net is – we have been using it for years, after all. Recently though I struggled to find out how to use a custom PatternConverter in a pattern layout via configuration.

So with a simple PatternConverter such as:

public class TestPatternConverter : PatternConverter
{
    protected override void Convert(TextWriter writer, object state)
    {
        writer.Write("test");
    }
}

Which, as you can blatantly see, does little more than write out “test” – but this is just to demonstrate the concept. The documentation describes adding the converter in code using the AddConverter operation – but makes no mention of how to do it in config.

...
<layout type="log4net.Layout.PatternLayout">
  <conversionPattern value="%writeTest %message" />
  <converter>
    <name value="writeTest" />
    <type value="Demo.Logging.TestPatternConverter, Demo.Logging" />
  </converter>
</layout>
...

Pretty straightforward really – within the PatternLayout add a converter tag, naming it and giving the qualified type name. You can then reference the named item just as you would any other pattern in your layout. So here we would helpfully get “test” written before the log message! Obviously it is possible to imagine more useful scenarios…

Query App Insights customEvent custom dimensions

On a recent project using MassTransit to build an event-based data exchange system, I thought it would be really sensible for tracing to add the serialized message to App Insights custom events – it turned out to be really helpful, making tracing so much easier.

I already had a MassTransit IConsumeObserver to log any exceptions, so adding the consumed message with its content was relatively simple. Ultimately the TelemetryClient TrackEvent operation accepts an IDictionary<string, string> to record custom dimensions, so all that was required was a dimensions builder:

public interface IBuildCustomDimensions
{
    Dictionary<string, string> Build<T>(T message);
}

public class CustomDimensionsBuilder : IBuildCustomDimensions
{
    public Dictionary<string, string> Build<T>(T message)
    {
        return CreateDefaultMessageProperties(message);
    }

    private Dictionary<string, string> CreateDefaultMessageProperties<T>(T message)
    {
        if (!(message is ISessionMessage)) return null;

        var session = message as ISessionMessage;
        return new Dictionary<string, string>()
        {
            { "SessionId", session.SessionId.ToString() },
            { "Message", JsonConvert.SerializeObject(message) }
        };
    }
}

Pretty simple – our own session Id as a dimension for easy querying, along with the JSON-serialized message as another dimension.

When logged, the custom dimensions displayed as expected (ignoring the complete lack of imagination in the made-up message, of course):

{
  "SessionId": "4bd715d5-bfbc-47c2-a10a-2738f5795627",
  "Message": "{\"Value1\":123, \"Value2\":456}"
}

The beauty of this was that, along with our session identifier, we could easily trace all the messages for a change. But the power of App Insights querying meant we could also read and query using the message content – just remember to use “tostring” on the serialized message before parsing – like so:

customEvents
| sort by timestamp desc
| extend Value1 = parsejson(tostring(customDimensions.Message)).Value1
| limit 5

Azure PowerShell setup

On a recent project we had automated the creation of environments on Azure using ARM templates, and had wrapped them in a few quite basic PowerShell scripts for use in development/testing and within continuous integration/delivery. Some engineers were reporting issues executing the scripts – such as syntax errors pointing to version mismatches. It turned out that it was actually quite easy for an engineer who randomly installs stuff to play with (and yes, I do mean me) to have multiple versions installed and competing!

Firstly, to get a view of the current state, within PowerShell you can use:

Get-Module -ListAvailable Azure

This should output something like:

Directory: C:\Program Files\WindowsPowerShell\Modules

ModuleType Version Name  ExportedCommands
---------- ------- ----  ----------------
Script     3.1.0   Azure {Get-AzureAutomationCertificate, Get-AzureAutomationConnec...

Azure PowerShell uses semantic versioning as detailed in https://docs.microsoft.com/en-gb/azure/powershell-install-configure, and anything less than 2.1.0 is not designed to run side by side. If you find yourself with a version below 2.1.0, uninstall the "Microsoft Azure PowerShell" feature using "Programs and Features".


Then to install the latest version, the recommended method is to use the PowerShell Gallery. You can find the latest version using:

Find-Module AzureRM

Then install using:

Install-Module AzureRM -AllowClobber

Then you can identify the versions of everything Azure-related that is installed using:

Get-Module -ListAvailable Azure*