The Things Network HTTP Integration Part 10

Assembling the components

After a series of articles exploring how portions of the solution could be built, I now had working code for receiving The Things Network (TTN) HTTP integration JSON messages with an Azure Function HTTPTrigger (secured with an API key) and putting them into an Azure Storage Queue for processing. This code was intentionally kept as small and as simple as possible so there was less to go wrong, and the required configuration is minimal.

HTTP Endpoint handler application
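
For reference, a minimal sketch of what the HTTP endpoint handler looks like. This is an illustrative assumption rather than the deployed code: the function name, the AuthorizationLevel.Function (API key) setting and the queue output binding are mine, but the shape matches the description above.

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class UplinkMessageWebhook
{
   [FunctionName("UplinkMessageWebhook")]
   [return: Queue("%UplinkQueueName%", Connection = "AzureStorageConnectionString")]
   public static async Task<string> Run(
      [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest request,
      ILogger log)
   {
      // Read the TTN HTTP integration JSON body and queue it as-is, deserialisation happens in the queue processor
      string payload = await new StreamReader(request.Body).ReadToEndAsync();

      log.LogInformation($"Uplink message received, {payload.Length} characters");

      return payload;
   }
}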

In the last couple of posts I had been building an Azure Function with a QueueTrigger to process the uplink messages. The function used custom bindings so that the CloudQueueMessage could be accessed, and loaded the Azure Storage account plus queue name from configuration. I’m still using classes generated by JSON2CSharp (with minimal modifications) for deserialising the payloads with JSON.Net.
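
The JSON2CSharp generated classes aren't reproduced in full here; a cut-down illustration of the shape they take for a TTN V2 uplink message is below. Only the properties used by the processor are shown, and the Metadata class name is an assumption.

using System;

public class PayloadV5
{
   public string app_id { get; set; }
   public string dev_id { get; set; }
   public string hardware_serial { get; set; }
   public int port { get; set; }
   public int counter { get; set; }
   public bool is_retry { get; set; }
   public string payload_raw { get; set; }
   public object payload_fields { get; set; } // JSON.Net materialises this as a JObject when present
   public Metadata metadata { get; set; }
}

public class Metadata
{
   public DateTime time { get; set; }
}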

The message processor Azure Function uses a ConcurrentDictionary to store DeviceClient objects constructed using the information returned by the Azure Device Provisioning Service (DPS). This is so the DPS doesn’t have to be called for the connection details for every message. (When the Azure Function is restarted the dictionary of DeviceClient objects has to be repopulated.) If there is a backlog of messages the message processor can process more than a dozen messages concurrently, so the telemetry events displayed in an application like Azure IoT Central can arrive out of order.

The solution uses DPS Group Enrollment with Symmetric Key Attestation so Azure IoT Hub devices can be “automagically” created when a message from a new device is processed. The processing code is multi-threaded and relies on many error conditions being handled by the Azure Functions retry mechanism. After a number of failed retries the messages are moved to a poison queue. Azure Storage Explorer is a good tool for viewing payloads and moving poison messages back to the processing queue.

public static class UplinkMessageProcessor
{
   static readonly ConcurrentDictionary<string, DeviceClient> DeviceClients = new ConcurrentDictionary<string, DeviceClient>();

   [FunctionName("UplinkMessageProcessor")]
   public static async Task Run(
      [QueueTrigger("%UplinkQueueName%", Connection = "AzureStorageConnectionString")]
      CloudQueueMessage cloudQueueMessage, // Used to get CloudQueueMessage.Id for logging
      Microsoft.Azure.WebJobs.ExecutionContext context,
      ILogger log)
   {
      PayloadV5 payloadObject;
      DeviceClient deviceClient = null;
      DeviceProvisioningServiceSettings deviceProvisioningServiceConfig;

      string environmentName = Environment.GetEnvironmentVariable("ENVIRONMENT");

      // Load configuration from the appsettings.json file(s) and environment variables
      var configuration = new ConfigurationBuilder()
         .SetBasePath(context.FunctionAppDirectory)
         .AddJsonFile("appsettings.json")
         .AddJsonFile($"appsettings.{environmentName}.json")
         .AddEnvironmentVariables()
         .Build();

      // Load configuration for DPS. Refactor approach and store securely...
      try
      {
         deviceProvisioningServiceConfig = configuration.GetSection("DeviceProvisioningService").Get<DeviceProvisioningServiceSettings>();
      }
      catch (Exception ex)
      {
         log.LogError(ex, $"Configuration loading failed");
         throw;
      }

      // Deserialise uplink message from Azure storage queue
      try
      {
         payloadObject = JsonConvert.DeserializeObject<PayloadV5>(cloudQueueMessage.AsString);
      }
      catch (Exception ex)
      {
         log.LogError(ex, $"MessageID:{cloudQueueMessage.Id} uplink message deserialisation failed");
         throw;
      }

      // Extract the device ID as it's used in lots of places
      string registrationID = payloadObject.hardware_serial;

      // Construct the prefix used in all the logging
      string messagePrefix = $"MessageID: {cloudQueueMessage.Id} DeviceID:{registrationID} Counter:{payloadObject.counter} Application ID:{payloadObject.app_id}";
      log.LogInformation($"{messagePrefix} Uplink message device processing start");

      // See if the device has already been provisioned
      if (DeviceClients.TryAdd(registrationID, deviceClient))
      {
         log.LogInformation($"{messagePrefix} Device provisioning start");

         string enrollmentGroupSymmetricKey = deviceProvisioningServiceConfig.EnrollmentGroupSymmetricKeyDefault;

         // Figure out if there is a custom enrollment group mapping for the TTN application ID
         if (deviceProvisioningServiceConfig.ApplicationEnrollmentGroupMapping != null)
         {
            enrollmentGroupSymmetricKey = deviceProvisioningServiceConfig.ApplicationEnrollmentGroupMapping.GetValueOrDefault(payloadObject.app_id, deviceProvisioningServiceConfig.EnrollmentGroupSymmetricKeyDefault);
         }

         // Do DPS magic first time device seen
         await DeviceRegistration(log, messagePrefix, deviceProvisioningServiceConfig.GlobalDeviceEndpoint, deviceProvisioningServiceConfig.ScopeID, enrollmentGroupSymmetricKey, registrationID);
      }

      // Wait for the Device Provisioning Service to complete on this or other thread
      log.LogInformation($"{messagePrefix} Device provisioning polling start");
      if (!DeviceClients.TryGetValue(registrationID, out deviceClient))
      {
         log.LogError($"{messagePrefix} Device provisioning polling TryGet before while failed");

         throw new ApplicationException($"{messagePrefix} Device provisioning polling TryGet before while failed");
      }

      while (deviceClient == null)
      {
         log.LogInformation($"{messagePrefix} provisioning polling delay");
         await Task.Delay(deviceProvisioningServiceConfig.DeviceProvisioningPollingDelay);

         if (!DeviceClients.TryGetValue(registrationID, out deviceClient))
         {
            log.LogError($"{messagePrefix} Device provisioning polling TryGet while loop failed");

            throw new ApplicationException($"{messagePrefix} Device provisioning polling TryGet while loop failed");
         }
      }

      // Assemble the JSON payload to send to Azure IoT Hub/Central.
      log.LogInformation($"{messagePrefix} Payload assembly start");
      JObject telemetryEvent = new JObject();
      try
      {
         JObject payloadFields = (JObject)payloadObject.payload_fields;
         telemetryEvent.Add("HardwareSerial", payloadObject.hardware_serial);
         telemetryEvent.Add("Retry", payloadObject.is_retry);
         telemetryEvent.Add("Counter", payloadObject.counter);
         telemetryEvent.Add("DeviceID", payloadObject.dev_id);
         telemetryEvent.Add("ApplicationID", payloadObject.app_id);
         telemetryEvent.Add("Port", payloadObject.port);
         telemetryEvent.Add("PayloadRaw", payloadObject.payload_raw);
         telemetryEvent.Add("ReceivedAt", payloadObject.metadata.time);

         // If the payload has been unpacked in TTN backend add fields to telemetry event payload
         if (payloadFields != null)
         {
            foreach (JProperty child in payloadFields.Children())
            {
               EnumerateChildren(telemetryEvent, child);
            }
         }
      }
      catch (Exception ex)
      {
         if (DeviceClients.TryRemove(registrationID, out deviceClient))
         {
            log.LogWarning($"{messagePrefix} TryRemove payload assembly failed");
         }

         log.LogError(ex, $"{messagePrefix} Payload assembly failed");
         throw;
      }

      // Send the message to Azure IoT Hub/Azure IoT Central
      log.LogInformation($"{messagePrefix} Payload SendEventAsync start");
      try
      {
         using (Message ioTHubmessage = new Message(Encoding.ASCII.GetBytes(JsonConvert.SerializeObject(telemetryEvent))))
         {
            // Ensure the displayed time is the acquired time rather than the uploaded time, especially important for messages that end up in the poison queue
            ioTHubmessage.Properties.Add("iothub-creation-time-utc", payloadObject.metadata.time.ToString("s", CultureInfo.InvariantCulture));
            await deviceClient.SendEventAsync(ioTHubmessage);
         }
      }
      catch (Exception ex)
      {
         if (DeviceClients.TryRemove(registrationID, out deviceClient))
         {
            log.LogWarning($"{messagePrefix} TryRemove SendEventAsync failed");
         }

         log.LogError(ex, $"{messagePrefix} SendEventAsync failed");
         throw;
      }

      log.LogInformation($"{messagePrefix} Uplink message device processing completed");
   }
}
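
The DeviceRegistration and EnumerateChildren helpers referenced above live in the same class. The repository versions may differ in detail; this is a sketch of roughly what they do, assuming the usual Microsoft.Azure.Devices.Client, Microsoft.Azure.Devices.Provisioning.Client and System.Security.Cryptography namespaces, with the device key derived from the group enrollment key in the documented DPS symmetric key fashion.

   static async Task DeviceRegistration(ILogger log, string messagePrefix, string globalDeviceEndpoint, string idScope, string enrollmentGroupSymmetricKey, string registrationId)
   {
      DeviceClient deviceClient;

      try
      {
         // For a group enrollment the device key is derived from the group key using the registration ID
         string deviceKey;
         using (var hmac = new HMACSHA256(Convert.FromBase64String(enrollmentGroupSymmetricKey)))
         {
            deviceKey = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(registrationId)));
         }

         using (var securityProvider = new SecurityProviderSymmetricKey(registrationId, deviceKey, null))
         using (var transport = new ProvisioningTransportHandlerAmqp(TransportFallbackType.TcpOnly))
         {
            ProvisioningDeviceClient provisioningClient = ProvisioningDeviceClient.Create(globalDeviceEndpoint, idScope, securityProvider, transport);

            DeviceRegistrationResult result = await provisioningClient.RegisterAsync();
            if (result.Status != ProvisioningRegistrationStatusType.Assigned)
            {
               throw new ApplicationException($"{messagePrefix} Device provisioning failed with status {result.Status}");
            }

            IAuthenticationMethod authentication = new DeviceAuthenticationWithRegistrySymmetricKey(result.DeviceId, securityProvider.GetPrimaryKey());
            deviceClient = DeviceClient.Create(result.AssignedHub, authentication, TransportType.Amqp_Tcp_Only);
            await deviceClient.OpenAsync();
         }

         // Replace the null placeholder added by TryAdd so the polling loop(s) can proceed
         DeviceClients.TryUpdate(registrationId, deviceClient, null);
      }
      catch (Exception ex)
      {
         // Remove the placeholder so the message is retried rather than waiting on a DeviceClient that will never arrive
         DeviceClients.TryRemove(registrationId, out _);
         log.LogError(ex, $"{messagePrefix} Device provisioning failed");
         throw;
      }
   }

   // Recursively flatten the TTN payload_fields JSON into the telemetry event
   static void EnumerateChildren(JObject jobject, JToken token)
   {
      if (token is JProperty property)
      {
         if (token.First is JValue)
         {
            jobject.Add(property.Name, (JValue)token.First);
         }
         else
         {
            JObject childObject = new JObject();
            foreach (JToken child in token.Children())
            {
               EnumerateChildren(childObject, child);
            }
            jobject.Add(property.Name, childObject);
         }
      }
      else
      {
         foreach (JToken child in token.Children())
         {
            EnumerateChildren(jobject, child);
         }
      }
   }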

There is also support for using a specific GroupEnrollment key based on the app_id in the uplink message payload.

"DeviceProvisioningService": {
      "GlobalDeviceEndpoint": "global.azure-devices-provisioning.net",
      "ScopeID": "",
      "EnrollmentGroupSymmetricKeyDefault": "TopSecretKey",
      "DeviceProvisioningPollingDelay": 500,
      "ApplicationEnrollmentGroupMapping": {
         "Application1": "TopSecretKey1",
         "Application2": "TopSecretKey2"
      }
   }
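
The settings class that this configuration section is bound to isn't shown in the post; based on the property names used above it would look roughly like the following (a sketch, not the repository class).

using System.Collections.Generic;

public class DeviceProvisioningServiceSettings
{
   public string GlobalDeviceEndpoint { get; set; }
   public string ScopeID { get; set; }
   public string EnrollmentGroupSymmetricKeyDefault { get; set; }
   public int DeviceProvisioningPollingDelay { get; set; }
   public Dictionary<string, string> ApplicationEnrollmentGroupMapping { get; set; }
}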

In addition to the appsettings.json file there is configuration for Application Insights, the uplink message queue name and the Azure Storage connection strings. The “ENVIRONMENT” setting is important as it specifies which appsettings.{environment}.json file should be loaded, e.g. when the code is being debugged.

TTN Integration uplink message processor configuration
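
When debugging locally the equivalent values live in local.settings.json; a representative example is below, with placeholder values rather than anything from the deployed solution.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "APPINSIGHTS_INSTRUMENTATIONKEY": "00000000-0000-0000-0000-000000000000",
    "ENVIRONMENT": "Development",
    "UplinkQueueName": "uplinkmessages",
    "AzureStorageConnectionString": "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..."
  }
}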

The deployed solution consists of Azure IoT Hub and DPS instances. There are two Azure Functions: one puts the messages from TTN into a queue, the other processes them. The Azure Functions are hosted in an Azure App Service plan.

Azure solution deployment

An Azure Storage account is used for the queue and Azure Function synchronisation information and Azure Application Insights is used to monitor the solution.

NLog and Application Insights Revisited

Just a few small changes to my NLog sample logging to Azure Application Insights.

I modified the application so I could provide the InstrumentationKey via the command line or the ApplicationInsights.config file. (I have a minimalist config for this sample.)

using System;
using Microsoft.ApplicationInsights.Extensibility;
using NLog;

namespace devMobile.Azure.ApplicationInsightsNLogClient
{
   class Program
   {
      private static Logger log = LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType.ToString());

      static void Main(string[] args)
      {
         if ((args.Length != 0) && (args.Length != 1))
         {
            Console.WriteLine("Usage ApplicationInsightsNLogClient");
            Console.WriteLine("      ApplicationInsightsNLogClient <instrumentationKey>");
            return;
         }

         if (args.Length == 1)
         {
            TelemetryConfiguration.Active.InstrumentationKey = args[0];
         }

         log.Trace("This is an nLog Trace message");
         log.Debug("This is an nLog Debug message");
         log.Info("This is an nLog Info message");
         log.Warn("This is an nLog Warning message");
         log.Error("This is an nLog Error message");
         log.Fatal("This is an nLog Fatal message");

         TelemetryConfiguration.Active.TelemetryChannel.Flush();

         Console.WriteLine("Press <enter> to exit");
         Console.ReadLine();
      }
   }
}
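
The routing of NLog messages to Application Insights is done in NLog.config rather than in code. A minimal configuration along these lines (assuming the Microsoft.ApplicationInsights.NLogTarget package, which provides the ApplicationInsightsTarget) is enough for this sample.

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <extensions>
    <add assembly="Microsoft.ApplicationInsights.NLogTarget" />
  </extensions>
  <targets>
    <target xsi:type="ApplicationInsightsTarget" name="aiTarget" />
  </targets>
  <rules>
    <logger name="*" minlevel="Trace" writeTo="aiTarget" />
  </rules>
</nlog>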

Code for my sample console application is here.

Azure Function Log4Net configuration Revisited

In a previous post I showed how I configured Apache Log4Net and Azure Application Insights to work with an Azure Function; this is the code updated to .Net Core V3.1.

With the different versions of the libraries involved (Early April 2020) this was what I found worked for me so YMMV.

Initially the logging to Application Insights wasn’t working even though it was configured in the ApplicationInsights.config file. After some experimentation I found setting the APPINSIGHTS_INSTRUMENTATIONKEY environment variable was the only way I could get it to work.
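
When debugging locally the environment variable can be supplied via local.settings.json (the value below is a placeholder); in Azure it is an Application Setting on the Function App.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "APPINSIGHTS_INSTRUMENTATIONKEY": "00000000-0000-0000-0000-000000000000"
  }
}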

namespace ApplicationInsightsAzureFunctionLog4NetClient
{
	using System;
	using System.IO;
	using System.Reflection;
	using log4net;
	using log4net.Config;
	using Microsoft.ApplicationInsights;
	using Microsoft.ApplicationInsights.Extensibility;
	using Microsoft.Azure.WebJobs;

	public static class ApplicationInsightsTimer
	{
		[FunctionName("ApplicationInsightsTimerLog4Net")]
		public static void Run([TimerTrigger("0 */1 * * * *")]TimerInfo myTimer, ExecutionContext executionContext)
		{
         ILog log = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

         using (TelemetryConfiguration telemetryConfiguration = TelemetryConfiguration.CreateDefault())
         {
            TelemetryClient telemetryClient = new TelemetryClient(telemetryConfiguration);
 
            var logRepository = LogManager.GetRepository(Assembly.GetEntryAssembly());
            XmlConfigurator.Configure(logRepository, new FileInfo(Path.Combine(executionContext.FunctionAppDirectory, "log4net.config")));

            log.Debug("This is a Log4Net Debug message");
            log.Info("This is a Log4Net Info message");
            log.Warn("This is a Log4Net Warning message");
            log.Error("This is a Log4Net Error message");
            log.Fatal("This is a Log4Net Fatal message");

            telemetryClient.Flush();
         }
      }
   }
}

I did notice that there were a number of exceptions which warrant further investigation.

'func.exe' (CoreCLR: clrhost): Loaded 'C:\Users\BrynLewis\source\repos\AzureApplicationInsightsClients\ApplicationInsightsAzureFunctionLog4NetClient\bin\Debug\netcoreapp3.1\bin\log4net.dll'. 
Exception thrown: 'System.IO.FileNotFoundException' in System.Private.CoreLib.dll
Exception thrown: 'System.IO.FileNotFoundException' in System.Private.CoreLib.dll
Exception thrown: 'System.IO.FileNotFoundException' in System.Private.CoreLib.dll
Exception thrown: 'System.IO.FileNotFoundException' in System.Private.CoreLib.dll
Exception thrown: 'System.IO.FileNotFoundException' in System.Private.CoreLib.dll
Exception thrown: 'System.IO.FileNotFoundException' in System.Private.CoreLib.dll
'func.exe' (CoreCLR: clrhost): Loaded 'C:\Users\BrynLewis\AppData\Local\AzureFunctionsTools\Releases\2.47.1\cli_x64\System.Xml.XmlDocument.dll'. 
'func.exe' (CoreCLR: clrhost): Loaded 'C:\Users\BrynLewis\source\repos\AzureApplicationInsightsClients\ApplicationInsightsAzureFunctionLog4NetClient\bin\Debug\netcoreapp3.1\bin\Microsoft.ApplicationInsights.Log4NetAppender.dll'. 
'func.exe' (CoreCLR: clrhost): Loaded 'C:\Users\BrynLewis\AppData\Local\AzureFunctionsTools\Releases\2.47.1\cli_x64\System.Reflection.TypeExtensions.dll'. 
Application Insights Telemetry: {"name":"Microsoft.ApplicationInsights.64b1950b90bb46aaa36c26f5dce0cad3.Message","time":"2020-04-09T09:22:33.2274370Z","iKey":"1234567890123-1234-12345-123456789012","tags":{"ai.cloud.roleInstance":"DESKTOP-C9IPNQ1","ai.operation.id":"bc6c4d10cebd954c9d815ad06add2582","ai.operation.parentId":"|bc6c4d10cebd954c9d815ad06add2582.d8fa83b88b175348.","ai.operation.name":"ApplicationInsightsTimerLog4Net","ai.location.ip":"0.0.0.0","ai.internal.sdkVersion":"log4net:2.13.1-12554","ai.internal.nodeName":"DESKTOP-C9IPNQ1"},"data":{"baseType":"MessageData","baseData":{"ver":2,"message":"This is a Log4Net Info message","severityLevel":"Information","properties":{"Domain":"NOT AVAILABLE","InvocationId":"91063ef9-70d0-4318-a1e0-e49ade07c51b","ThreadName":"14","ClassName":"?","LogLevel":"Information","ProcessId":"15824","Category":"Function.ApplicationInsightsTimerLog4Net","MethodName":"?","Identity":"NOT AVAILABLE","FileName":"?","LoggerName":"ApplicationInsightsAzureFunctionLog4NetClient.ApplicationInsightsTimer","LineNumber":"?"}}}}

The latest code for my Azure Function Log4net to Application Insights sample is available here.

“Don’t forget to flush” .Net Core Application Insights

This post updates a previous post “Don’t forget to flush Application Insights Revisited” for .Net Core 3.X and shows the small change required by the deprecation of one of the TelemetryClient constructor overloads.

warning CS0618: ‘TelemetryClient.TelemetryClient()’ is obsolete: ‘We do not recommend using TelemetryConfiguration.Active on .NET Core. See https://github.com/microsoft/ApplicationInsights-dotnet/issues/1152 for more details’

   class Program
   {
      static void Main(string[] args)
      {
#if INSTRUMENTATION_KEY_TELEMETRY_CONFIGURATION
         if (args.Length != 1)
         {
            Console.WriteLine("Usage AzureApplicationInsightsClientConsole <instrumentationKey>");
            return;
         }

         TelemetryConfiguration telemetryConfiguration = new TelemetryConfiguration(args[0]);
         TelemetryClient telemetryClient = new TelemetryClient(telemetryConfiguration);
         telemetryClient.TrackTrace("INSTRUMENTATION_KEY_TELEMETRY_CONFIGURATION", SeverityLevel.Information);
#endif
#if INSTRUMENTATION_KEY_APPLICATION_INSIGHTS_CONFIG
         TelemetryConfiguration telemetryConfiguration = TelemetryConfiguration.CreateDefault();
         TelemetryClient telemetryClient = new TelemetryClient(telemetryConfiguration);
         telemetryClient.TrackTrace("INSTRUMENTATION_KEY_APPLICATION_INSIGHTS_CONFIG", SeverityLevel.Information);
#endif
         telemetryClient.Context.User.Id = Environment.UserName;
         telemetryClient.Context.Device.Id = Environment.MachineName;
         telemetryClient.Context.Operation.Name = "Test harness";

         telemetryClient.TrackTrace("This is a .Net Core AI API Verbose message", SeverityLevel.Verbose);
         telemetryClient.TrackTrace("This is a .Net Core AI API Information message", SeverityLevel.Information);
         telemetryClient.TrackTrace("This is a .Net Core AI API Warning message", SeverityLevel.Warning);
         telemetryClient.TrackTrace("This is a .Net Core AI API Error message", SeverityLevel.Error);
         telemetryClient.TrackTrace("This is a .Net Core AI API Critical message", SeverityLevel.Critical);

         telemetryClient.Flush();

         telemetryConfiguration.Dispose(); // In real-world code use a using block or similar approach to ensure this is cleaned up

         Console.WriteLine("Press <enter> to exit");
         Console.ReadLine();
      }
   }

A sample project is available here

“Don’t forget to flush” Application Insights Revisited

This post revisits a previous post “Don’t forget to flush” Application Insights and shows how to configure the instrumentation key in code or via the ApplicationInsights.config file.

   class Program
   {
      static void Main(string[] args)
      {
#if INSTRUMENTATION_KEY_TELEMETRY_CONFIGURATION
         if (args.Length != 1)
         {
            Console.WriteLine("Usage AzureApplicationInsightsClientConsole <instrumentationKey>");
            return;
         }

         TelemetryConfiguration telemetryConfiguration = new TelemetryConfiguration(args[0]);
         TelemetryClient telemetryClient = new TelemetryClient(telemetryConfiguration);
         telemetryClient.TrackTrace("INSTRUMENTATION_KEY_TELEMETRY_CONFIGURATION", SeverityLevel.Information);
#endif
#if INSTRUMENTATION_KEY_APPLICATION_INSIGHTS_CONFIG
         TelemetryClient telemetryClient = new TelemetryClient();
         telemetryClient.TrackTrace("INSTRUMENTATION_KEY_APPLICATION_INSIGHTS_CONFIG", SeverityLevel.Information);
#endif
         telemetryClient.TrackTrace("This is an AI API Verbose message", SeverityLevel.Verbose);
         telemetryClient.TrackTrace("This is an AI API Information message", SeverityLevel.Information);
         telemetryClient.TrackTrace("This is an AI API Warning message", SeverityLevel.Warning);
         telemetryClient.TrackTrace("This is an AI API Error message", SeverityLevel.Error);
         telemetryClient.TrackTrace("This is an AI API Critical message", SeverityLevel.Critical);

         telemetryClient.Flush();

         Console.WriteLine("Press <enter> to exit");
         Console.ReadLine();
      }
   }

A sample project is available here

Azure Function Log4Net configuration

This post was inspired by the couple of hours lost from my life yesterday while I figured out how to get Apache Log4Net and Azure Application Insights working in an Azure Function built with .Net Core 2.X.

After extensive searching I found a couple of relevant blog posts but these had complex approaches and I wanted to keep the churn in the codebase I was working on to an absolute minimum.

With the different versions of the libraries involved (Late March 2019) this was what worked for me so YMMV. To provide the simplest possible example I have created a TimerTrigger which logs information via Log4Net to Azure Application Insights.

Initially the Log4Net configuration wasn’t loaded because its location is usually configured in the AssemblyInfo.cs file and .Net Core 2.x code doesn’t have one.

// You can specify all the values or you can default the Build and Revision Numbers
// by using the '*' as shown below:
// [assembly: AssemblyVersion("1.0.*")]
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
[assembly: log4net.Config.XmlConfigurator]

I figured I would have to manually load the Log4Net configuration and had to look at the file system of the machine running the function to figure out where the Log4Net XML configuration file was getting copied to.

The “Copy to output directory” setting is important
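
The log4net.config itself just routes everything to the Application Insights appender. A minimal version, assuming the Microsoft.ApplicationInsights.Log4NetAppender package which supplies the appender, looks something like this.

<?xml version="1.0" encoding="utf-8" ?>
<log4net>
  <appender name="aiAppender" type="Microsoft.ApplicationInsights.Log4NetAppender.ApplicationInsightsAppender, Microsoft.ApplicationInsights.Log4NetAppender">
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%message%newline" />
    </layout>
  </appender>
  <root>
    <level value="ALL" />
    <appender-ref ref="aiAppender" />
  </root>
</log4net>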

Then I had to get the Dependency Injection (DI) framework to build an ExecutionContext for me so I could get the FunctionAppDirectory to combine with the Log4Net config file name. I used Path.Combine which is more robust and secure than manually concatenating segments of a path together.

/*
    Copyright ® 2019 March devMobile Software, All Rights Reserved
 
    MIT License
...
*/
namespace ApplicationInsightsAzureFunctionLog4NetClient
{
	using System;
	using System.IO;
	using System.Reflection;
	using log4net;
	using log4net.Config;
	using Microsoft.ApplicationInsights.Extensibility;
	using Microsoft.Azure.WebJobs;

	public static class ApplicationInsightsTimer
	{
		[FunctionName("ApplicationInsightsTimerLog4Net")]
		public static void Run([TimerTrigger("0 */1 * * * *")]TimerInfo myTimer, ExecutionContext executionContext)
		{
			ILog log = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

			TelemetryConfiguration.Active.InstrumentationKey = Environment.GetEnvironmentVariable("InstrumentationKey", EnvironmentVariableTarget.Process);

			var logRepository = LogManager.GetRepository(Assembly.GetEntryAssembly());
			XmlConfigurator.Configure(logRepository, new FileInfo(Path.Combine(executionContext.FunctionAppDirectory, "log4net.config")));

			log.Debug("This is a Log4Net Debug message");
			log.Info("This is a Log4Net Info message");
			log.Warn("This is a Log4Net Warning message");
			log.Error("This is a Log4Net Error message");
			log.Fatal("This is a Log4Net Fatal message");

			TelemetryConfiguration.Active.TelemetryChannel.Flush();
		}
	}
}

Log4Net logging in Azure Application Insights

The latest code for my Azure Function Log4net to Applications Insights sample along with some samples for other logging platforms is available on GitHub.

NLog and Application Insights

Another of my clients has an application which uses NLog and sooner or later they are going to want to move their logging to Azure Application Insights.

The application consists of a number of Azure websites and some embedded clients. The Azure applications log information to the local file system on each box but the number of boxes is growing so finding and tracing issues is becoming painful.

//---------------------------------------------------------------------------------
// Copyright (c) 2018, devMobile Software
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//---------------------------------------------------------------------------------
using System;
using Microsoft.ApplicationInsights.Extensibility;
using NLog;

namespace devMobile.Azure.ApplicationInsightsNLogClient
{
   class Program
   {
      private static Logger log = LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType.ToString());

      static void Main(string[] args)
      {
         if (args.Length != 1)
         {
            Console.WriteLine("Command line argument InstrumentationKey missing");
            return;
         }
         TelemetryConfiguration.Active.InstrumentationKey = args[0];

         log.Trace("This is nLog");
         log.Debug("This is a Debug message");
         log.Info("This is a Info message");
         log.Warn("This is a Warning message");
         log.Error("This is an Error message");
         log.Fatal("This is a Fatal message");

         new Microsoft.ApplicationInsights.TelemetryClient().Flush();
      }
   }
}

Sample code ApplicationInsightsNLogClient

Two clients to go: one uses Serilog, the other has a DIY system which I’m ignoring for as long as possible.

Microsoft Enterprise Library and Application Insights

One of my clients has a largish application (120+ projects) which uses the Microsoft Patterns and Practices Enterprise Library V6 data access, exception handling, logging and transient fault handling blocks.

To get consistent logging across Classic Cloud Services, Azure websites and Azure Functions etc. we are in the process of moving all our diagnostics to Azure Application Insights.

My proof of concept uses a community developed Enterprise Library listener and it appears to be working well.

Beware: the Visual Studio configuration tool plug-in rewrites the application config file, removing the Application Insights Enterprise Library trace listener setup.

The code for a smallest example application is below (I pass the instrumentation key as a command line parameter).

//---------------------------------------------------------------------------------
// Copyright (c) 2018, devMobile Software
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//---------------------------------------------------------------------------------
using System;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Practices.EnterpriseLibrary.Logging;
using Microsoft.Practices.EnterpriseLibrary.ExceptionHandling;
namespace ApplicationInsightsEnterpriseLibraryClient
{
   class Program
   {
      static void Main(string[] args)
      {
         if (args.Length != 1)
         {
            Console.WriteLine("Command line argument InstrumentationKey missing");
            return;
         }
         TelemetryConfiguration.Active.InstrumentationKey = args[0];

         LogWriterFactory logWriterFactory = new LogWriterFactory();
         LogWriter logWriter = logWriterFactory.Create();
         Logger.SetLogWriter(logWriter);

         ExceptionManager exceptionManager = new ExceptionPolicyFactory().CreateManager();
         ExceptionPolicy.SetExceptionManager(exceptionManager);

         logWriter.Write("This is Entlib", "General");

         logWriter.Write("Application startup", "Startup");

         logWriter.Write("General category", "General");
         logWriter.Write(new LogEntry() { Severity = System.Diagnostics.TraceEventType.Error, Categories = { "General" }, Message = "General category more complex overload", Title = "Dumpster fire" });

         try
         {
            throw new ApplicationException("Something bad has happened");
         }
         catch (Exception ex)
         {
            bool rethrow = ExceptionPolicy.HandleException(ex, "ProgramMain");
            if (rethrow)
               throw;
         }

         logWriter.Write("Application shutdown", "Shutdown");

         new Microsoft.ApplicationInsights.TelemetryClient().Flush();
      }
   }
}

Sample project ApplicationInsightsEnterpriseLibraryClient

Thanks to bveerendrakumar for sharing your code

“Don’t forget to flush” Application Insights

Revisited March 2020

An Azure solution I was working on had a .Net console application which ran on a server at the customer’s premises. It was a scheduled task that uploaded files to Azure Blob Storage every 5 minutes.

To help with debugging I added support for Azure Application Insights, but after monitoring the application for a while I noticed some shutdown events were not getting uploaded.

Initially I was a bit confused because when I ran the application on my desktop it worked fine (“it works on my machine”). I found this was because, when launched from the debugger, the application would upload any files it found then wait until I pressed <enter> to exit, and this was enough time for the shutdown messages to get uploaded.

The code for a smallest example application is below (I pass the instrumentation key as a command line parameter).

//---------------------------------------------------------------------------------
// Copyright (c) 2018, devMobile Software
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//---------------------------------------------------------------------------------
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

namespace devMobile.Azure.ApplicationInsightsClientConsole
{
   class Program
   {
      static void Main(string[] args)
      {
         if (args.Length != 1)
         {
            Console.WriteLine("Command line argument InstrumentationKey missing");
            return;
         }
         TelemetryConfiguration.Active.InstrumentationKey = args[0];

         TelemetryClient telemetryClient = new TelemetryClient();

         telemetryClient.TrackTrace("This is Application Insights native");

         telemetryClient.TrackTrace("Application startup");

         // application does stuff

         telemetryClient.TrackTrace("Application shutdown");

         telemetryClient.Flush();
      }
   }
}

Sample project AzureApplicationInsightsClientConsole

Azure IoT Hub nRF24L01 Windows 10 IoT Core Field Gateway

This project is now live on Hackster.IO and github.com with sample *duino, Devduino and Netduino clients. While building the AdaFruit.IO field gateway, Azure IoT Hub field gateways and sample clients I changed the structure of the message payload and spent a bit of time removing non-core functionality and code.

The diagnostics logging code was refactored several times and after reading this reference on docs.Microsoft.com I settled on the published approach.
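
That approach uses the Windows.Foundation.Diagnostics LoggingChannel, whose events can be viewed in the Device Portal ETW viewer. A cut-down illustration of the pattern is below; the channel name, event name and fields are illustrative rather than the field gateway's actual ones, and the GUID is the provider commonly documented for viewing custom events in the Device Portal.

using System;
using Windows.Foundation.Diagnostics;

public static class FieldGatewayDiagnostics
{
   // Channel name is illustrative; the GUID is the documented Device Portal logging provider
   private static readonly LoggingChannel Channel =
      new LoggingChannel("devMobile Field Gateway", null, new Guid("4bd2826e-54a1-4ba9-bf63-92b73ea1ac4a"));

   public static void MessageReceived(string deviceAddress, int payloadLength)
   {
      // Event name and fields are illustrative, not the field gateway's actual events
      LoggingFields fields = new LoggingFields();
      fields.AddString("DeviceAddress", deviceAddress);
      fields.AddInt32("PayloadLength", payloadLength);
      Channel.LogEvent("Message received", fields, LoggingLevel.Information);
   }
}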

I considered using the built-in Universal Windows Platform (UWP) application data class but this would have made configuration in the field hard for most of the targeted users, school students and IT departments.

I have the application running at my house and it has proved pretty robust. Last week I thought it had crashed because the telemetry data stopped for about 20 minutes. I had a look at the Device Portal and it was because Windows 10 IoT Core had downloaded some updates, applied them and then rebooted automatically (as configured).

I put a socket on the Raspberry Pi nRF24L01 shield rather than soldering the module to the board so that I could compare the performance of the low and high power modules. The antenna end of the high power module tends to droop so I put a small piece of plastic foam underneath to prop it up.

I had code to generate an empty JSON configuration but I removed that as it added complexity compared to putting a sample in the github repository.

I considered using a binary format (the nRF24L01 maximum message length is 32 bytes) but the code required to make it sufficiently flexible rapidly got out of hand; most of my devices don’t have many sensors (battery/solar powered *duinos) and it wasn’t a major hassle to send another message, so I removed it.

I need to tidy up the project and remove the unused Visual Assets and have a look at the automated update support.