Windows 10 IoT Core Cognitive Services Azure IoT Hub Client

This application builds on my Windows 10 IoT Core Cognitive Services Custom Vision API client. It uses my Lego brick classifier model and a new m&m object detection model.

m&m counter test rig

I created a new Visual Studio 2017 Windows IoT Core project, copied across the Windows 10 IoT Core Cognitive Services Custom Vision API code (changing the namespace and manifest details), and added the Azure Devices Client NuGet package.

Azure Devices Client NuGet

In the start-up code I initialise the Azure IoT Hub client, update the device twin reported properties, and retrieve the device twin desired property settings.

try
{
	this.azureIoTHubClient = DeviceClient.CreateFromConnectionString(this.azureIoTHubConnectionString, this.transportType);
}
catch (Exception ex)
{
	this.logging.LogMessage("AzureIOT Hub DeviceClient.CreateFromConnectionString failed " + ex.Message, LoggingLevel.Error);
	return;
}

try
{
	TwinCollection reportedProperties = new TwinCollection();

	// This is from the OS
	reportedProperties["Timezone"] = TimeZoneSettings.CurrentTimeZoneDisplayName;
	reportedProperties["OSVersion"] = Environment.OSVersion.VersionString;
	reportedProperties["MachineName"] = Environment.MachineName;

	reportedProperties["ApplicationDisplayName"] = package.DisplayName;
	reportedProperties["ApplicationName"] = packageId.Name;
	reportedProperties["ApplicationVersion"] = string.Format($"{version.Major}.{version.Minor}.{version.Build}.{version.Revision}");

	// Unique identifier from the hardware
	SystemIdentificationInfo systemIdentificationInfo = SystemIdentification.GetSystemIdForPublisher();
	using (DataReader reader = DataReader.FromBuffer(systemIdentificationInfo.Id))
	{
		byte[] bytes = new byte[systemIdentificationInfo.Id.Length];
		reader.ReadBytes(bytes);
		reportedProperties["SystemId"] = BitConverter.ToString(bytes);
	}
	this.azureIoTHubClient.UpdateReportedPropertiesAsync(reportedProperties).Wait();
}
catch (Exception ex)
{
	this.logging.LogMessage("Azure IoT Hub client UpdateReportedPropertiesAsync failed " + ex.Message, LoggingLevel.Error);
	return;
}

try
{
	LoggingFields configurationInformation = new LoggingFields();

	Twin deviceTwin = this.azureIoTHubClient.GetTwinAsync().GetAwaiter().GetResult();

	if (!deviceTwin.Properties.Desired.Contains("ImageUpdateDue") || !TimeSpan.TryParse(deviceTwin.Properties.Desired["ImageUpdateDue"].value.ToString(), out imageUpdateDue))
	{
		this.logging.LogMessage("DeviceTwin.Properties ImageUpdateDue setting missing or invalid format", LoggingLevel.Warning);
		return;
	}
	configurationInformation.AddTimeSpan("ImageUpdateDue", imageUpdateDue);

	if (!deviceTwin.Properties.Desired.Contains("ImageUpdatePeriod") || !TimeSpan.TryParse(deviceTwin.Properties.Desired["ImageUpdatePeriod"].value.ToString(), out imageUpdatePeriod))
	{
		this.logging.LogMessage("DeviceTwin.Properties ImageUpdatePeriod setting missing or invalid format", LoggingLevel.Warning);
		return;
	}
…
	if (!deviceTwin.Properties.Desired.Contains("DebounceTimeout") || !TimeSpan.TryParse(deviceTwin.Properties.Desired["DebounceTimeout"].value.ToString(), out debounceTimeout))
	{
		this.logging.LogMessage("DeviceTwin.Properties DebounceTimeout setting missing or invalid format", LoggingLevel.Warning);
		return;
	}
				configurationInformation.AddTimeSpan("DebounceTimeout", debounceTimeout);

	this.logging.LogEvent("Configuration settings", configurationInformation);
}
catch (Exception ex)
{
	this.logging.LogMessage("Azure IoT Hub client GetTwinAsync failed or property missing/invalid" + ex.Message, LoggingLevel.Error);
	return;
}

When the digital input (configured in the app.settings file) is strobed or the timer fires (configured in the device twin properties), an image is captured and uploaded to the Azure Cognitive Services Custom Vision service for processing.
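
The ImageUpdateDue and ImageUpdatePeriod settings retrieved from the device twin might be applied to a System.Threading.Timer something like this (a minimal sketch; the field and callback names are illustrative, not the actual implementation):

this.imageUpdatetimer = new Timer(this.ImageUpdateTimerCallback, null, imageUpdateDue, imageUpdatePeriod);

private void ImageUpdateTimerCallback(object state)
{
	Debug.WriteLine($"{DateTime.UtcNow:yy-MM-dd HH:mm:ss} Timer triggered");
	// Capture an image and send it to the Custom Vision service for processing
}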

The returned results are then post-processed to make them Azure IoT Central friendly, and finally uploaded to an Azure IoT Hub.
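
The upload itself is the usual DeviceClient pattern; a minimal sketch, assuming telemetryDataPoint is a Dictionary<string, object> of tag names and values built by the post-processing code below, and JsonConvert is from the Newtonsoft.Json NuGet package:

try
{
	using (Message message = new Message(Encoding.ASCII.GetBytes(JsonConvert.SerializeObject(telemetryDataPoint))))
	{
		Debug.WriteLine($"{DateTime.UtcNow:HH:mm:ss} AzureIoTHubClient SendEventAsync start");
		await this.azureIoTHubClient.SendEventAsync(message);
		Debug.WriteLine($"{DateTime.UtcNow:HH:mm:ss} AzureIoTHubClient SendEventAsync finish");
	}
}
catch (Exception ex)
{
	this.logging.LogMessage("AzureIoTHubClient SendEventAsync failed " + ex.Message, LoggingLevel.Error);
}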

For testing I have used a simple object detection model.

I trained the model with images of 6 different colours of m&m’s.

For my first dataset I tagged the location of a single m&m of each colour in 15 images.

Testing the training of the model

I then trained the model multiple times, adding additional images where the model was having trouble distinguishing colours.

The published name comes from the training performance tab

Project settings

The ProjectID, AzureCognitiveServicesSubscriptionKey (PredictionKey) and PublishedName (from the Performance tab of the project) come from the Custom Vision project properties.

All of the Custom Vision model settings are configured in the Azure IoT Hub device properties.
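
These are retrieved with the same GetTwinAsync pattern shown in the start-up code above; a hedged sketch for two of the settings (the desired property names are assumptions for illustration):

if (!deviceTwin.Properties.Desired.Contains("ProjectID") || !Guid.TryParse(deviceTwin.Properties.Desired["ProjectID"].value.ToString(), out projectId))
{
	this.logging.LogMessage("DeviceTwin.Properties ProjectID setting missing or invalid format", LoggingLevel.Warning);
	return;
}

if (!deviceTwin.Properties.Desired.Contains("PublishedName"))
{
	this.logging.LogMessage("DeviceTwin.Properties PublishedName setting missing", LoggingLevel.Warning);
	return;
}
publishedName = deviceTwin.Properties.Desired["PublishedName"].value.ToString();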

The app.settings file contains only the hardware configuration settings and the Azure IoT Hub connection string.

{
  "InterruptPinNumber": 24,
  "interruptTriggerOn": "RisingEdge",
  "DisplayPinNumber": 35,
  "AzureIoTHubConnectionString": "",
  "TransportType": "Mqtt"
} 

The LED connected to the display pin is illuminated while an image is being processed, or briefly flashed if insufficient time has passed between image captures.
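
A minimal sketch of that display handling, assuming a GpioPin field for the display pin and a one-shot System.Threading.Timer whose callback (as in the earlier posts below) writes the pin low again:

// Flash the LED briefly when a capture is declined (busy or too soon);
// the timer callback turns it off after timerPeriodDetectIlluminated
this.displayGpioPin.Write(GpioPinValue.High);
this.displayOffTimer.Change(this.timerPeriodDetectIlluminated, this.timerPeriodInfinite);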

The image data is post processed differently based on the model.

// Post process the predictions based on the type of model
switch (modelType)
{
	case ModelType.Classification:
		// Use only the tags above the specified minimum probability
		foreach (var prediction in imagePrediction.Predictions)
		{
			if (prediction.Probability >= probabilityThreshold)
			{
				// Display and log the individual tag probabilities
				Debug.WriteLine($" Tag valid:{prediction.TagName} {prediction.Probability:0.00}");
				imageInformation.AddDouble($"Tag valid:{prediction.TagName}", prediction.Probability);
				telemetryDataPoint.Add(prediction.TagName, prediction.Probability);
			}
		}
		break;

	case ModelType.Detection:
		// Group the tags to get the count, include only the predictions above the specified minimum probability
		var groupedPredictions = from prediction in imagePrediction.Predictions
			where prediction.Probability >= probabilityThreshold
			group prediction by new { prediction.TagName } into newGroup
			select new
			{
				TagName = newGroup.Key.TagName,
				Count = newGroup.Count(),
			};

		// Display and log the aggregated predictions
		foreach (var prediction in groupedPredictions)
		{
			Debug.WriteLine($" Tag valid:{prediction.TagName} {prediction.Count}");
			imageInformation.AddInt32($"Tag valid:{prediction.TagName}", prediction.Count);
			telemetryDataPoint.Add(prediction.TagName, prediction.Count);
		}
		break;
	default:
		throw new ArgumentException("ModelType Invalid");
}

For a classifier, only the tags with a probability greater than or equal to the specified threshold are uploaded.

For a detection model, the instances of each tag are counted; only predictions with a probability greater than or equal to the specified threshold are included in the count.

19-08-14 05:26:14 Timer triggered
Prediction count 33
 Tag:Blue 0.0146500813
 Tag:Blue 0.61186564
 Tag:Blue 0.0923164859
 Tag:Blue 0.7813785
 Tag:Brown 0.0100603029
 Tag:Brown 0.128318727
 Tag:Brown 0.0135991769
 Tag:Brown 0.687322736
 Tag:Brown 0.846672833
 Tag:Brown 0.1826635
 Tag:Brown 0.0183384717
 Tag:Green 0.0200069249
 Tag:Green 0.367765248
 Tag:Green 0.011428359
 Tag:Orange 0.678825438
 Tag:Orange 0.03718319
 Tag:Orange 0.8643157
 Tag:Orange 0.0296728313
 Tag:Red 0.02141669
 Tag:Red 0.7183208
 Tag:Red 0.0183610674
 Tag:Red 0.0130951973
 Tag:Red 0.82097
 Tag:Red 0.0618815944
 Tag:Red 0.0130757084
 Tag:Yellow 0.04150853
 Tag:Yellow 0.0106579047
 Tag:Yellow 0.0210028365
 Tag:Yellow 0.03392527
 Tag:Yellow 0.129197285
 Tag:Yellow 0.8089519
 Tag:Yellow 0.03723789
 Tag:Yellow 0.74729687
 Tag valid:Blue 2
 Tag valid:Brown 2
 Tag valid:Orange 2
 Tag valid:Red 2
 Tag valid:Yellow 2
 05:26:17 AzureIoTHubClient SendEventAsync start
 05:26:18 AzureIoTHubClient SendEventAsync finish

The debugging output of the application includes the different categories identified in the captured image.

I found my small model was pretty good at detecting individual m&m's as long as the ambient lighting was consistent and the background was fairly plain.

Sample image from test rig

Every so often the camera contrast setting went bad and could only be restored by restarting the device; this needs further investigation.

Image with contrast problem

This application could be the basis for projects which need to run an Azure Cognitive Services model to count or classify objects, then upload the results to an Azure IoT Hub or Azure IoT Central for presentation.

With a suitable model this application could be used to count the number of people in a room, which could be displayed along with the ambient temperature, humidity, CO2, and noise levels in Azure IoT Central.

The code for this application is available on GitHub.

Windows 10 IoT Core Cognitive Services Custom Vision API

This application was inspired by one of the teachers I work with wanting to count ducks in the stream on the school grounds. The school was having problems with water quality and they wanted to see if the number of ducks was a factor. (Manually counting the ducks several times a day would be impractical.)

I didn't have a source of training images so I built an image classifier using my son's Lego for testing. In a future post I will build an object detection model once I have some sample images of the stream captured by my Windows 10 IoT Core time-lapse camera application.

To start with I added the Azure Cognitive Services Custom Vision API NuGet packages to a new Visual Studio 2017 Windows IoT Core project.

Azure Custom Vision Service NuGet packages

Then I initialised the Custom Vision API prediction client

try
{
	this.customVisionClient = new CustomVisionPredictionClient(new System.Net.Http.DelegatingHandler[] { })
	{
		ApiKey = this.azureCognitiveServicesSubscriptionKey,
		Endpoint = this.azureCognitiveServicesEndpoint,
	};
}
catch (Exception ex)
{
	this.logging.LogMessage("Azure Cognitive Services Custom Vision Client configuration failed " + ex.Message, LoggingLevel.Error);
	return;
}

Every time the digital input is strobed by the infra-red proximity sensor or touch button, an image is captured, uploaded for processing, and the results are displayed in the debug output.

For testing I have used a simple multiclass classifier that I trained with a selection of my son's Lego. I tagged the brick size as height x width x length (1x2x3, smallest of width/height first) and colour (red, green, blue etc.).

Azure Cognitive Services Classifier project creation
Custom vision projects
Lego classifier project properties

The ProjectID, AzureCognitiveServicesSubscriptionKey (PredictionKey) and PublishedName (from the Performance tab of the project) in the app.settings file come from the Custom Vision project properties.

{
  "InterruptPinNumber": 24,
  "interruptTriggerOn": "RisingEdge",
  "DisplayPinNumber": 35,
  "AzureCognitiveServicesEndpoint": "https://australiaeast.api.cognitive.microsoft.com",
  "AzureCognitiveServicesSubscriptionKey": "41234567890123456789012345678901s,
  "DebounceTimeout": "00:00:30",
  "PublishedName": "LegoBrickClassifierV3",
  "TriggerTag": "1x2x4",
  "TriggerThreshold": "0.4",
  "ProjectID": "c1234567-abcdefghijklmn-1234567890ab"
} 

The sample application only supports one trigger tag + probability; if this condition is satisfied, the Light Emitting Diode (LED) is turned on for 5 seconds. If an image is being processed or the minimum period between images has not passed, the LED is illuminated for 5 milliseconds.

private async void InterruptGpioPin_ValueChanged(GpioPin sender, GpioPinValueChangedEventArgs args)
{
	DateTime currentTime = DateTime.UtcNow;
	Debug.WriteLine($"Digital Input Interrupt {sender.PinNumber} triggered {args.Edge}");

	if (args.Edge != this.interruptTriggerOn)
	{
		return;
	}

	// Check that enough time has passed for picture to be taken
	if ((currentTime - this.imageLastCapturedAtUtc) < this.debounceTimeout)
	{
		this.displayGpioPin.Write(GpioPinValue.High);
		this.displayOffTimer.Change(this.timerPeriodDetectIlluminated, this.timerPeriodInfinite);
		return;
	}

	this.imageLastCapturedAtUtc = currentTime;

	// Just incase - stop code being called while photo already in progress
	if (this.cameraBusy)
	{
		this.displayGpioPin.Write(GpioPinValue.High);
		this.displayOffTimer.Change(this.timerPeriodDetectIlluminated, this.timerPeriodInfinite);
		return;
	}

	this.cameraBusy = true;

	try
	{
		using (Windows.Storage.Streams.InMemoryRandomAccessStream captureStream = new Windows.Storage.Streams.InMemoryRandomAccessStream())
		{
			this.mediaCapture.CapturePhotoToStreamAsync(ImageEncodingProperties.CreateJpeg(), captureStream).AsTask().Wait();
			captureStream.FlushAsync().AsTask().Wait();
			captureStream.Seek(0);

			IStorageFile photoFile = await KnownFolders.PicturesLibrary.CreateFileAsync(ImageFilename, CreationCollisionOption.ReplaceExisting);
			ImageEncodingProperties imageProperties = ImageEncodingProperties.CreateJpeg();
			await this.mediaCapture.CapturePhotoToStorageFileAsync(imageProperties, photoFile);

			ImageAnalysis imageAnalysis = await this.computerVisionClient.AnalyzeImageInStreamAsync(captureStream.AsStreamForRead());

			Debug.WriteLine($"Tag count {imageAnalysis.Categories.Count}");

			if (imageAnalysis.Categories.Intersect(this.categoryList, new CategoryComparer()).Any())
			{
				this.displayGpioPin.Write(GpioPinValue.High);

				// Start the timer to turn the LED off
				this.displayOffTimer.Change(this.timerPeriodFaceIlluminated, this.timerPeriodInfinite);
			}

			LoggingFields imageInformation = new LoggingFields();

			imageInformation.AddDateTime("TakenAtUTC", currentTime);
			imageInformation.AddInt32("Pin", sender.PinNumber);
			Debug.WriteLine($"Categories:{imageAnalysis.Categories.Count}");
			imageInformation.AddInt32("Categories", imageAnalysis.Categories.Count);
			foreach (Category category in imageAnalysis.Categories)
			{
				Debug.WriteLine($" Category:{category.Name} {category.Score}");
				imageInformation.AddDouble($"Category:{category.Name}", category.Score);
			}

			this.logging.LogEvent("Captured image processed by Cognitive Services", imageInformation);
		}
	}
	catch (Exception ex)
	{
		this.logging.LogMessage("Camera photo or save failed " + ex.Message, LoggingLevel.Error);
	}
	finally
	{
		this.cameraBusy = false;
	}
}

private void TimerCallback(object state)
{
	this.displayGpioPin.Write(GpioPinValue.Low);
}

internal class CategoryComparer : IEqualityComparer<Category>
{
	public bool Equals(Category x, Category y)
	{
		if (string.Equals(x.Name, y.Name, StringComparison.OrdinalIgnoreCase))
		{
			return true;
		}

		return false;
	}

	public int GetHashCode(Category obj)
	{
		return obj.Name.GetHashCode();
	}
}
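
The handler above shows the image capture plumbing; the Custom Vision specific part is roughly the following (a hedged sketch assuming the ClassifyImageAsync method on CustomVisionPredictionClient and illustrative field names for the app.settings values, placed where the captured stream is available):

// Upload the captured image to the Custom Vision project for classification
ImagePrediction imagePrediction = await this.customVisionClient.ClassifyImageAsync(this.projectId, this.publishedName, captureStream.AsStreamForRead());

Debug.WriteLine($"Prediction count {imagePrediction.Predictions.Count}");

foreach (var prediction in imagePrediction.Predictions)
{
	Debug.WriteLine($" Tag:{prediction.TagName} {prediction.Probability}");
}

// Turn the LED on if the trigger tag probability is at or above the threshold
if (imagePrediction.Predictions.Any(p => (p.TagName == this.triggerTag) && (p.Probability >= this.triggerThreshold)))
{
	this.displayGpioPin.Write(GpioPinValue.High);
	this.displayOffTimer.Change(this.timerPeriodFaceIlluminated, this.timerPeriodInfinite);
}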

I found my small model was pretty good at tagging images of Lego bricks as long as the ambient lighting was consistent and the background fairly plain.

When tagging many bricks my ability to distinguish pearl light grey, light grey, sand blue and grey bricks was a problem. I should have started with a limited palette (red, green, blue) of colours and shapes for my models while evaluating different tagging approaches.

The debugging output of the application includes the different categories identified in the captured image.

Digital Input Interrupt 24 triggered RisingEdge
Digital Input Interrupt 24 triggered FallingEdge
Prediction count 54
 Tag:Lime 0.529844046
 Tag:1x1x2 0.4441353
 Tag:Green 0.252290249
 Tag:1x1x3 0.1790101
 Tag:1x2x3 0.132092983
 Tag:Turquoise 0.128928885
 Tag:DarkGreen 0.09383947
 Tag:DarkTurquoise 0.08993266
 Tag:1x2x2 0.08145093
 Tag:1x2x4 0.060960535
 Tag:LightBlue 0.0525473
 Tag:MediumAzure 0.04958712
 Tag:Violet 0.04894981
 Tag:SandGreen 0.048463434
 Tag:LightOrange 0.044860106
 Tag:1X1X1 0.0426577441
 Tag:Azure 0.0416654423
 Tag:Aqua 0.0400410332
 Tag:OliveGreen 0.0387720577
 Tag:Blue 0.035169173
 Tag:White 0.03497391
 Tag:Pink 0.0321456343
 Tag:Transparent 0.0246597622
 Tag:MediumBlue 0.0245670844
 Tag:BrightPink 0.0223842952
 Tag:Flesh 0.0221406389
 Tag:Magenta 0.0208457354
 Tag:Purple 0.0188888311
 Tag:DarkPurple 0.0187285
 Tag:MaerskBlue 0.017609369
 Tag:DarkPink 0.0173041821
 Tag:Lavender 0.0162359159
 Tag:PearlLightGrey 0.0152829709
 Tag:1x1x4 0.0133710662
 Tag:Red 0.0122602312
 Tag:Yellow 0.0118704
 Tag:Clear 0.0114340987
 Tag:LightYellow 0.009903331
 Tag:Black 0.00877647
 Tag:BrightLightYellow 0.00871937349
 Tag:Mediumorange 0.0078356415
 Tag:Tan 0.00738664949
 Tag:Sand 0.00713921571
 Tag:Grey 0.00710422
 Tag:Orange 0.00624707434
 Tag:SandBlue 0.006215865
 Tag:DarkGrey 0.00613187673
 Tag:DarkBlue 0.00578308525
 Tag:DarkOrange 0.003790971
 Tag:DarkTan 0.00348462746
 Tag:LightGrey 0.00321317
 Tag:ReddishBrown 0.00304117263
 Tag:LightBluishGrey 0.00273489812
 Tag:Brown 0.00199119

I'm going to run this application repeatedly, adding more images and retraining the model to see how it performs. Once the model is working well I'll try downloading it and running it on a device.

Custom Vision Test Harness running on my desk

This sample could be used as a basis for projects like this cat door which stops your pet bringing in dead or wounded animals. The model could be trained with tags to indicate whether the cat is carrying a "present" for its human, and the door locked if it is.

RFM69 shield library Part2

Register Dump

The next step was to dump all of the registers (0x00 RegFifo thru 0x4F RegTemp2) of the RFM69HCW device.

/*
    Copyright © 2019 May devMobile Software, All Rights Reserved

    MIT License

    Permission is hereby granted, free of charge, to any person obtaining a copy
    of this software and associated documentation files (the "Software"), to deal
    in the Software without restriction, including without limitation the rights
    to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
    copies of the Software, and to permit persons to whom the Software is
    furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice shall be included in all
    copies or substantial portions of the Software.

    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
    AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
    OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
    SOFTWARE.

    RegFifo 0x00 thru RegTemp2 0x4F

*/
namespace devMobile.IoT.Rfm69Hcw.RegisterScan
{
	using System;
	using System.Diagnostics;
	using System.Threading.Tasks;
	using Windows.ApplicationModel.Background;
	using Windows.Devices.Spi;

	public sealed class Rfm69HcwDevice
	{
		private SpiDevice rfm69Hcw;

		public Rfm69HcwDevice(int chipSelectPin)
		{
			SpiController spiController = SpiController.GetDefaultAsync().AsTask().GetAwaiter().GetResult();
			var settings = new SpiConnectionSettings(chipSelectPin)
			{
				ClockFrequency = 500000,
				Mode = SpiMode.Mode0,
			};

			rfm69Hcw = spiController.GetDevice(settings);
		}

		public Byte RegisterReadByte(byte registerAddress)
		{
			byte[] writeBuffer = new byte[] { registerAddress };
			byte[] readBuffer = new byte[1];
			Debug.Assert(rfm69Hcw != null);

			rfm69Hcw.TransferSequential(writeBuffer, readBuffer);

			return readBuffer[0];
		}
	}

	public sealed class StartupTask : IBackgroundTask
	{
		private const int ChipSelectLine = 0;
		private Rfm69HcwDevice rfm69HcwDevice = new Rfm69HcwDevice(ChipSelectLine);

		public void Run(IBackgroundTaskInstance taskInstance)
		{

			while (true)
			{
				for (byte registerIndex = 0; registerIndex <= 0x4F; registerIndex++)
				{
					byte registerValue = rfm69HcwDevice.RegisterReadByte(registerIndex);

					Debug.WriteLine("Register 0x{0:x2} - Value 0X{1:x2} - Bits {2}", registerIndex, registerValue, Convert.ToString(registerValue, 2).PadLeft(8, '0'));
				}

				Task.Delay(10000).Wait();
			}
		}
	}
}

I checked a selection of values from the Debug output and they matched the defaults in the datasheet, e.g. 0x07-RegFrfMsb 0xE4, 0x08-RegFrfMid 0xC0, 0x09-RegFrfLsb 0x00, 0x2C-RegPreambleMsb 0x00, 0x2D-RegPreambleLsb 0x03 and 0x4E-RegTemp1 0x01.

Register 0x00 - Value 0X00 - Bits 00000000
Register 0x01 - Value 0X04 - Bits 00000100
Register 0x02 - Value 0X00 - Bits 00000000
Register 0x03 - Value 0X1a - Bits 00011010
Register 0x04 - Value 0X0b - Bits 00001011
Register 0x05 - Value 0X00 - Bits 00000000
Register 0x06 - Value 0X52 - Bits 01010010
Register 0x07 - Value 0Xe4 - Bits 11100100
Register 0x08 - Value 0Xc0 - Bits 11000000
Register 0x09 - Value 0X00 - Bits 00000000
Register 0x0a - Value 0X41 - Bits 01000001
Register 0x0b - Value 0X40 - Bits 01000000
Register 0x0c - Value 0X02 - Bits 00000010
Register 0x0d - Value 0X92 - Bits 10010010
Register 0x0e - Value 0Xf5 - Bits 11110101
Register 0x0f - Value 0X20 - Bits 00100000
Register 0x10 - Value 0X24 - Bits 00100100
Register 0x11 - Value 0X9f - Bits 10011111
Register 0x12 - Value 0X09 - Bits 00001001
Register 0x13 - Value 0X1a - Bits 00011010
Register 0x14 - Value 0X40 - Bits 01000000
Register 0x15 - Value 0Xb0 - Bits 10110000
Register 0x16 - Value 0X7b - Bits 01111011
Register 0x17 - Value 0X9b - Bits 10011011
Register 0x18 - Value 0X08 - Bits 00001000
Register 0x19 - Value 0X86 - Bits 10000110
Register 0x1a - Value 0X8a - Bits 10001010
Register 0x1b - Value 0X40 - Bits 01000000
Register 0x1c - Value 0X80 - Bits 10000000
Register 0x1d - Value 0X06 - Bits 00000110
Register 0x1e - Value 0X10 - Bits 00010000
Register 0x1f - Value 0X00 - Bits 00000000
Register 0x20 - Value 0X00 - Bits 00000000
Register 0x21 - Value 0X00 - Bits 00000000
Register 0x22 - Value 0X00 - Bits 00000000
Register 0x23 - Value 0X02 - Bits 00000010
Register 0x24 - Value 0Xff - Bits 11111111
Register 0x25 - Value 0X00 - Bits 00000000
Register 0x26 - Value 0X05 - Bits 00000101
Register 0x27 - Value 0X80 - Bits 10000000
Register 0x28 - Value 0X00 - Bits 00000000
Register 0x29 - Value 0Xff - Bits 11111111
Register 0x2a - Value 0X00 - Bits 00000000
Register 0x2b - Value 0X00 - Bits 00000000
Register 0x2c - Value 0X00 - Bits 00000000
Register 0x2d - Value 0X03 - Bits 00000011
Register 0x2e - Value 0X98 - Bits 10011000
Register 0x2f - Value 0X00 - Bits 00000000
Register 0x30 - Value 0X00 - Bits 00000000
Register 0x31 - Value 0X00 - Bits 00000000
Register 0x32 - Value 0X00 - Bits 00000000
Register 0x33 - Value 0X00 - Bits 00000000
Register 0x34 - Value 0X00 - Bits 00000000
Register 0x35 - Value 0X00 - Bits 00000000
Register 0x36 - Value 0X00 - Bits 00000000
Register 0x37 - Value 0X10 - Bits 00010000
Register 0x38 - Value 0X40 - Bits 01000000
Register 0x39 - Value 0X00 - Bits 00000000
Register 0x3a - Value 0X00 - Bits 00000000
Register 0x3b - Value 0X00 - Bits 00000000
Register 0x3c - Value 0X0f - Bits 00001111
Register 0x3d - Value 0X02 - Bits 00000010
Register 0x3e - Value 0X00 - Bits 00000000
Register 0x3f - Value 0X00 - Bits 00000000
Register 0x40 - Value 0X00 - Bits 00000000
Register 0x41 - Value 0X00 - Bits 00000000
Register 0x42 - Value 0X00 - Bits 00000000
Register 0x43 - Value 0X00 - Bits 00000000
Register 0x44 - Value 0X00 - Bits 00000000
Register 0x45 - Value 0X00 - Bits 00000000
Register 0x46 - Value 0X00 - Bits 00000000
Register 0x47 - Value 0X00 - Bits 00000000
Register 0x48 - Value 0X00 - Bits 00000000
Register 0x49 - Value 0X00 - Bits 00000000
Register 0x4a - Value 0X00 - Bits 00000000
Register 0x4b - Value 0X00 - Bits 00000000
Register 0x4c - Value 0X00 - Bits 00000000
Register 0x4d - Value 0X00 - Bits 00000000
Register 0x4e - Value 0X01 - Bits 00000001
Register 0x4f - Value 0X00 - Bits 00000000

Windows 10 IoT Core Cognitive Services Computer Vision API

This application was inspired by one of the teachers I work with wanting to check occupancy of different areas in the school library. I had been using the Computer Vision service to try and identify objects around my home and office which had been moderately successful but not terribly useful or accurate.

I added the Azure Cognitive Services Computer Vision API NuGet packages to my Visual Studio 2017 Windows IoT Core project.

Azure Cognitive Services Computer Vision API library

Then I initialised the Computer Vision API client

try
{
	this.computerVisionClient = new ComputerVisionClient(
			 new Microsoft.Azure.CognitiveServices.Vision.ComputerVision.ApiKeyServiceClientCredentials(this.azureCognitiveServicesSubscriptionKey),
			 new System.Net.Http.DelegatingHandler[] { })
	{
		Endpoint = this.azureCognitiveServicesEndpoint,
	};
}
catch (Exception ex)
{
	this.logging.LogMessage("Azure Cognitive Services Computer Vision client configuration failed " + ex.Message, LoggingLevel.Error);
	return;
}

Every time the digital input is strobed by the passive infra-red motion detector, an image is captured, then uploaded for processing, and finally the results are displayed. For this sample I'm looking for categories which indicate the image is of a group of people (the categories are configured in the app.settings file).

{
  "InterruptPinNumber": 24,
  "interruptTriggerOn": "RisingEdge",
  "DisplayPinNumber": 35,
  "AzureCognitiveServicesEndpoint": "https://australiaeast.api.cognitive.microsoft.com/",
  "AzureCognitiveServicesSubscriptionKey": "1234567890abcdefghijklmnopqrstuv",
  "ComputerVisionCategoryNames":"people_group,people_many",
  "LocalImageFilenameFormatLatest": "{0}.jpg",
  "LocalImageFilenameFormatHistoric": "{1:yyMMddHHmmss}.jpg",
  "DebounceTimeout": "00:00:30"
} 
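
A minimal sketch of how the ComputerVisionCategoryNames value might be turned into the category list used by the Intersect call in the handler below (field names assumed; Category is the Computer Vision SDK model class):

// Split the comma separated category names into the list used for matching
this.categoryList = this.computerVisionCategoryNames
	.Split(',')
	.Select(name => new Category() { Name = name.Trim() })
	.ToList();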

If any of the specified categories are identified in the image, I illuminate a Light Emitting Diode (LED) for 5 seconds. If an image is being processed or the minimum period between images has not passed, the LED is illuminated for 5 milliseconds.

		private async void InterruptGpioPin_ValueChanged(GpioPin sender, GpioPinValueChangedEventArgs args)
		{
			DateTime currentTime = DateTime.UtcNow;
			Debug.WriteLine($"Digital Input Interrupt {sender.PinNumber} triggered {args.Edge}");

			if (args.Edge != this.interruptTriggerOn)
			{
				return;
			}

			// Check that enough time has passed for picture to be taken
			if ((currentTime - this.imageLastCapturedAtUtc) < this.debounceTimeout)
			{
				this.displayGpioPin.Write(GpioPinValue.High);
				this.displayOffTimer.Change(this.timerPeriodDetectIlluminated, this.timerPeriodInfinite);
				return;
			}

			this.imageLastCapturedAtUtc = currentTime;

			// Just incase - stop code being called while photo already in progress
			if (this.cameraBusy)
			{
				this.displayGpioPin.Write(GpioPinValue.High);
				this.displayOffTimer.Change(this.timerPeriodDetectIlluminated, this.timerPeriodInfinite);
				return;
			}

			this.cameraBusy = true;

			try
			{
				using (Windows.Storage.Streams.InMemoryRandomAccessStream captureStream = new Windows.Storage.Streams.InMemoryRandomAccessStream())
				{
					this.mediaCapture.CapturePhotoToStreamAsync(ImageEncodingProperties.CreateJpeg(), captureStream).AsTask().Wait();
					captureStream.FlushAsync().AsTask().Wait();
					captureStream.Seek(0);

					IStorageFile photoFile = await KnownFolders.PicturesLibrary.CreateFileAsync(ImageFilename, CreationCollisionOption.ReplaceExisting);
					ImageEncodingProperties imageProperties = ImageEncodingProperties.CreateJpeg();
					await this.mediaCapture.CapturePhotoToStorageFileAsync(imageProperties, photoFile);

					ImageAnalysis imageAnalysis = await this.computerVisionClient.AnalyzeImageInStreamAsync(captureStream.AsStreamForRead());

					Debug.WriteLine($"Tag count {imageAnalysis.Categories.Count}");

					if (imageAnalysis.Categories.Intersect(this.categoryList, new CategoryComparer()).Any())
					{
						this.displayGpioPin.Write(GpioPinValue.High);

						// Start the timer to turn the LED off
						this.displayOffTimer.Change(this.timerPeriodFaceIlluminated, this.timerPeriodInfinite);
					}

					LoggingFields imageInformation = new LoggingFields();

					imageInformation.AddDateTime("TakenAtUTC", currentTime);
					imageInformation.AddInt32("Pin", sender.PinNumber);
					Debug.WriteLine($"Categories:{imageAnalysis.Categories.Count}");
					imageInformation.AddInt32("Categories", imageAnalysis.Categories.Count);
					foreach (Category category in imageAnalysis.Categories)
					{
						Debug.WriteLine($" Category:{category.Name} {category.Score}");
						imageInformation.AddDouble($"Category:{category.Name}", category.Score);
					}

					this.logging.LogEvent("Captured image processed by Cognitive Services", imageInformation);
				}
			}
			catch (Exception ex)
			{
				this.logging.LogMessage("Camera photo or save failed " + ex.Message, LoggingLevel.Error);
			}
			finally
			{
				this.cameraBusy = false;
			}
		}

		private void TimerCallback(object state)
		{
			this.displayGpioPin.Write(GpioPinValue.Low);
		}

		internal class CategoryComparer : IEqualityComparer<Category>
		{
			public bool Equals(Category x, Category y)
			{
				if (string.Equals(x.Name, y.Name, StringComparison.OrdinalIgnoreCase))
				{
					return true;
				}

				return false;
			}

			public int GetHashCode(Category obj)
			{
				return obj.Name.GetHashCode();
			}
		}

I found that the Computer Vision service was pretty good at categorising photos of images like this one, displayed on my second monitor, as containing a group of people.

The debugging output of the application includes the different categories identified in the captured image.

Digital Input Interrupt 24 triggered RisingEdge
Digital Input Interrupt 24 triggered FallingEdge
'backgroundTaskHost.exe' (CoreCLR: CoreCLR_UWP_Domain): Loaded 'C:\Data\Programs\WindowsApps\Microsoft.NET.CoreFramework.Debug.2.2_2.2.27505.2_arm__8wekyb3d8bbwe\System.Diagnostics.DiagnosticSource.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
'backgroundTaskHost.exe' (CoreCLR: CoreCLR_UWP_Domain): Loaded 'C:\Data\Programs\WindowsApps\Microsoft.NET.CoreFramework.Debug.2.2_2.2.27505.2_arm__8wekyb3d8bbwe\System.Collections.NonGeneric.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
'backgroundTaskHost.exe' (CoreCLR: CoreCLR_UWP_Domain): Loaded 'C:\Data\Programs\WindowsApps\Microsoft.NET.CoreFramework.Debug.2.2_2.2.27505.2_arm__8wekyb3d8bbwe\System.Runtime.Serialization.Formatters.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
'backgroundTaskHost.exe' (CoreCLR: CoreCLR_UWP_Domain): Loaded 'C:\Data\Programs\WindowsApps\Microsoft.NET.CoreFramework.Debug.2.2_2.2.27505.2_arm__8wekyb3d8bbwe\System.Diagnostics.TraceSource.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
'backgroundTaskHost.exe' (CoreCLR: CoreCLR_UWP_Domain): Loaded 'C:\Data\Programs\WindowsApps\Microsoft.NET.CoreFramework.Debug.2.2_2.2.27505.2_arm__8wekyb3d8bbwe\System.Collections.Specialized.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
'backgroundTaskHost.exe' (CoreCLR: CoreCLR_UWP_Domain): Loaded 'C:\Data\Programs\WindowsApps\Microsoft.NET.CoreFramework.Debug.2.2_2.2.27505.2_arm__8wekyb3d8bbwe\System.Drawing.Primitives.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
'backgroundTaskHost.exe' (CoreCLR: CoreCLR_UWP_Domain): Loaded 'C:\Data\Programs\WindowsApps\Microsoft.NET.CoreFramework.Debug.2.2_2.2.27505.2_arm__8wekyb3d8bbwe\System.Runtime.Serialization.Primitives.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
'backgroundTaskHost.exe' (CoreCLR: CoreCLR_UWP_Domain): Loaded 'C:\Data\Programs\WindowsApps\Microsoft.NET.CoreFramework.Debug.2.2_2.2.27505.2_arm__8wekyb3d8bbwe\System.Data.Common.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
'backgroundTaskHost.exe' (CoreCLR: CoreCLR_UWP_Domain): Loaded 'C:\Data\Programs\WindowsApps\Microsoft.NET.CoreFramework.Debug.2.2_2.2.27505.2_arm__8wekyb3d8bbwe\System.Xml.ReaderWriter.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
'backgroundTaskHost.exe' (CoreCLR: CoreCLR_UWP_Domain): Loaded 'C:\Data\Programs\WindowsApps\Microsoft.NET.CoreFramework.Debug.2.2_2.2.27505.2_arm__8wekyb3d8bbwe\System.Private.Xml.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
'backgroundTaskHost.exe' (CoreCLR: CoreCLR_UWP_Domain): Loaded 'Anonymously Hosted DynamicMethods Assembly'. 
Tag count 1
Categories:1
 Category:people_group 0.8671875
The thread 0x634 has exited with code 0 (0x0).

I used an infrared motion sensor to trigger capture and processing of an image to simulate an application for detecting whether there is a group of people in an area of the school library.

I'm going to run this application alongside one of my time-lapse applications to record a day's worth of images and manually check the accuracy of the image categorisation. I think that camera location may be important as well, so I'll try a selection of different USB cameras and locations.

Trial PIR triggered computer vision client

I also found the small PIR motion detector didn’t work very well in a larger space so I’m going to trial a configurable sensor and a repurposed burglar alarm sensor.

Windows 10 IoT Core Cognitive Services Face API

After building a series of Windows 10 IoT Core applications to capture images and store them, I figured some sample applications which used the Azure Cognitive Services Vision APIs to process captured images would be interesting.

This application was inspired by one of my students who has been looking at an Arduino-based LoRa wireless connected sensor for monitoring Ultraviolet (UV) light levels, and wanted to check that juniors at the school were wearing their hats on sunny days before going outside.

First I needed to create a Cognitive Services instance and get the subscription key and endpoint.

Azure Cognitive Services Instance Creation

Then I added the Azure Cognitive Services Face API NuGet packages into my Visual Studio Windows IoT Core project.

Azure Cognitive Services Vision Face API library

Then I initialised the Face API client

try
{
	this.faceClient = new FaceClient(
			 new Microsoft.Azure.CognitiveServices.Vision.Face.ApiKeyServiceClientCredentials(this.azureCognitiveServicesSubscriptionKey),
											 new System.Net.Http.DelegatingHandler[] { })
	{
		Endpoint = this.azureCognitiveServicesEndpoint,
	};
}
catch (Exception ex)
{
	this.logging.LogMessage("Azure Cognitive Services Face Client configuration failed " + ex.Message, LoggingLevel.Error);
	return;
}

Then every time the digital input is strobed an image is captured, then uploaded for processing, and finally the results are displayed. The interrupt handler has code to stop re-entrancy and contact bounce causing issues. I also requested that the Face service include age and gender attributes with associated confidence values.

If a face is found in the image, I illuminate a Light Emitting Diode (LED) for 5 seconds. If an image is being processed or the minimum period between images has not passed, the LED is illuminated for 5 milliseconds.

private async void InterruptGpioPin_ValueChanged(GpioPin sender, GpioPinValueChangedEventArgs args)
{
	DateTime currentTime = DateTime.UtcNow;
	Debug.WriteLine($"Digital Input Interrupt {sender.PinNumber} triggered {args.Edge}");

	if (args.Edge != this.interruptTriggerOn)
	{
		return;
	}

	// Check that enough time has passed for picture to be taken
	if ((currentTime - this.imageLastCapturedAtUtc) < this.debounceTimeout)
	{
		this.displayGpioPin.Write(GpioPinValue.High);
		this.displayOffTimer.Change(this.timerPeriodDetectIlluminated, this.timerPeriodInfinite);
		return;
	}

	this.imageLastCapturedAtUtc = currentTime;

	// Just incase - stop code being called while photo already in progress
	if (this.cameraBusy)
	{
		this.displayGpioPin.Write(GpioPinValue.High);
		this.displayOffTimer.Change(this.timerPeriodDetectIlluminated, this.timerPeriodInfinite);
		return;
	}

	this.cameraBusy = true;

	try
	{
		using (Windows.Storage.Streams.InMemoryRandomAccessStream captureStream = new Windows.Storage.Streams.InMemoryRandomAccessStream())
		{
			this.mediaCapture.CapturePhotoToStreamAsync(ImageEncodingProperties.CreateJpeg(), captureStream).AsTask().Wait();
			captureStream.FlushAsync().AsTask().Wait();
			captureStream.Seek(0);
			IStorageFile photoFile = await KnownFolders.PicturesLibrary.CreateFileAsync(ImageFilename, CreationCollisionOption.ReplaceExisting);
			ImageEncodingProperties imageProperties = ImageEncodingProperties.CreateJpeg();
			await this.mediaCapture.CapturePhotoToStorageFileAsync(imageProperties, photoFile);

			IList<FaceAttributeType> returnfaceAttributes = new List<FaceAttributeType>();
			returnfaceAttributes.Add(FaceAttributeType.Gender);
			returnfaceAttributes.Add(FaceAttributeType.Age);

			IList<DetectedFace> detectedFaces = await this.faceClient.Face.DetectWithStreamAsync(captureStream.AsStreamForRead(), returnFaceAttributes: returnfaceAttributes);

			Debug.WriteLine($"Count {detectedFaces.Count}");

			if (detectedFaces.Count > 0)
			{
				this.displayGpioPin.Write(GpioPinValue.High);

				// Start the timer to turn the LED off
				this.displayOffTimer.Change(this.timerPeriodFaceIlluminated, this.timerPeriodInfinite);
			}

			LoggingFields imageInformation = new LoggingFields();
			imageInformation.AddDateTime("TakenAtUTC", currentTime);
			imageInformation.AddInt32("Pin", sender.PinNumber);
			imageInformation.AddInt32("Faces", detectedFaces.Count);
			foreach (DetectedFace detectedFace in detectedFaces)
			{
				Debug.WriteLine("Face");
				if (detectedFace.FaceId.HasValue)
				{
					imageInformation.AddGuid("FaceId", detectedFace.FaceId.Value);
					Debug.WriteLine($" Id:{detectedFace.FaceId.Value}");
				}
				imageInformation.AddInt32("Left", detectedFace.FaceRectangle.Left);
				imageInformation.AddInt32("Width", detectedFace.FaceRectangle.Width);
				imageInformation.AddInt32("Top", detectedFace.FaceRectangle.Top);
				imageInformation.AddInt32("Height", detectedFace.FaceRectangle.Height);
				Debug.WriteLine($" L:{detectedFace.FaceRectangle.Left} W:{detectedFace.FaceRectangle.Width} T:{detectedFace.FaceRectangle.Top} H:{detectedFace.FaceRectangle.Height}");
				if (detectedFace.FaceAttributes != null)
				{
					if (detectedFace.FaceAttributes.Gender.HasValue)
					{
						imageInformation.AddString("Gender", detectedFace.FaceAttributes.Gender.Value.ToString());
						Debug.WriteLine($" Gender:{detectedFace.FaceAttributes.Gender.ToString()}");
					}

					if (detectedFace.FaceAttributes.Age.HasValue)
					{
						imageInformation.AddDouble("Age", detectedFace.FaceAttributes.Age.Value);
						Debug.WriteLine($" Age:{detectedFace.FaceAttributes.Age.Value.ToString("F1")}");
					}
				}
			}

			this.logging.LogEvent("Captured image processed by Cognitive Services", imageInformation);
		}
	}
	catch (Exception ex)
	{
		this.logging.LogMessage("Camera photo or save failed " + ex.Message, LoggingLevel.Error);
	}
	finally
	{
		this.cameraBusy = false;
	}
}

private void TimerCallback(object state)
{
	this.displayGpioPin.Write(GpioPinValue.Low);
}

This is the image uploaded to the Cognitive Services Vision Face API from my DragonBoard 410C

Which was a photo of this sample image displayed on my second monitor

The debugging output of the application includes the bounding box, gender, age and unique identifier of each detected face.

Digital Input Interrupt 24 triggered RisingEdge
Digital Input Interrupt 24 triggered FallingEdge
Count 13
Face
 Id:41ab8a38-180e-4b63-ab47-d502b8534467
 L:12 W:51 T:129 H:51
 Gender:Female
 Age:24.0
Face
 Id:554f7557-2b78-4392-9c73-5e51fedf0300
 L:115 W:48 T:146 H:48
 Gender:Female
 Age:19.0
Face
 Id:f67ae4cc-1129-46a8-8c5b-0e79f350cbaa
 L:547 W:46 T:162 H:46
 Gender:Female
 Age:56.0
Face
 Id:fad453fb-0923-4ae2-8c9d-73c9d89eaaf4
 L:585 W:45 T:116 H:45
 Gender:Female
 Age:25.0
Face
 Id:c2d2ca4e-faa6-49e8-8cd9-8d21abfc374c
 L:410 W:44 T:154 H:44
 Gender:Female
 Age:23.0
Face
 Id:6fb75edb-654c-47ff-baf0-847a31d2fd85
 L:70 W:44 T:57 H:44
 Gender:Male
 Age:37.0
Face
 Id:d6c97a9a-c49f-4d9c-8eac-eb2fbc03abc1
 L:469 W:44 T:122 H:44
 Gender:Female
 Age:38.0
Face
 Id:e193bf15-6d8c-4c30-adb5-4ca5fb0f0271
 L:206 W:44 T:117 H:44
 Gender:Male
 Age:33.0
Face
 Id:d1ba5a42-0475-4b65-afc8-0651439e1f1e
 L:293 W:44 T:74 H:44
 Gender:Male
 Age:59.0
Face
 Id:b6a7c551-bdad-4e38-8976-923b568d2721
 L:282 W:43 T:144 H:43
 Gender:Female
 Age:28.0
Face
 Id:8be87f6d-7350-4bc3-87f5-3415894b8fac
 L:513 W:42 T:78 H:42
 Gender:Male
 Age:36.0
Face
 Id:e73bd4d7-81a4-403c-aa73-1408ae1068c0
 L:163 W:36 T:94 H:36
 Gender:Female
 Age:44.0
Face
 Id:462a6948-a05e-4fea-918d-23d8289e0401
 L:407 W:36 T:73 H:36
 Gender:Male
 Age:27.0
The thread 0x8e0 has exited with code 0 (0x0).

I used a simple infrared proximity sensor to trigger the image capture to simulate an application for monitoring the number of people in, or entering, a room.

Infrared Proximity Sensor triggered Face API test client

Overall I found that with not a lot of code I could capture an image, upload it to the Azure Cognitive Services Face API for processing, and the algorithm would reasonably reliably detect faces and features.

Windows 10 IoT Core TPM SAS Token Expiry

This is for people who were searching for why the SAS token issued by the TPM on their Windows 10 IoT Core device expires much more quickly than expected, or who have noticed that something isn't quite right with the "validity" period (as at early May 2019). If you want to "follow along at home", the code I used is available on GitHub.

I found the SAS key was expiring in roughly 5 minutes and the validity period in the configuration didn’t appear to have any effect on how long the SAS token was valid.
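
For context, the client is created from the TPM-issued token; a minimal sketch, assuming the TpmDevice class from the Microsoft.Devices.Tpm NuGet package:

// Device connection details are provisioned into TPM slot 0
TpmDevice tpmDevice = new TpmDevice(0);

string azureIoTHubHostName = tpmDevice.GetHostName();
string deviceId = tpmDevice.GetDeviceId();
string sasToken = tpmDevice.GetSASToken(3600); // requested validity in seconds

DeviceClient azureIoTHubClient = DeviceClient.Create(
	azureIoTHubHostName,
	AuthenticationMethodFactory.CreateAuthenticationWithToken(deviceId, sasToken),
	TransportType.Mqtt);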

10:04:16 Application started
...
10:04:27 SAS token needs renewing
10:04:30 SAS token renewed 
 10:04:30.984 AzureIoTHubClient SendEventAsync starting
 10:04:36.709 AzureIoTHubClient SendEventAsync starting
The thread 0x1464 has exited with code 0 (0x0).
 10:04:37.808 AzureIoTHubClient SendEventAsync finished
 10:04:37.808 AzureIoTHubClient SendEventAsync finished
The thread 0xb88 has exited with code 0 (0x0).
The thread 0x1208 has exited with code 0 (0x0).
The thread 0x448 has exited with code 0 (0x0).
The thread 0x540 has exited with code 0 (0x0).
 10:04:46.763 AzureIoTHubClient SendEventAsync starting
 10:04:47.051 AzureIoTHubClient SendEventAsync finished
The thread 0x10d8 has exited with code 0 (0x0).
The thread 0x6e0 has exited with code 0 (0x0).
The thread 0xf7c has exited with code 0 (0x0).
 10:04:56.808 AzureIoTHubClient SendEventAsync starting
 10:04:57.103 AzureIoTHubClient SendEventAsync finished
The thread 0xb8c has exited with code 0 (0x0).
The thread 0xc60 has exited with code 0 (0x0).
 10:05:06.784 AzureIoTHubClient SendEventAsync starting
 10:05:07.057 AzureIoTHubClient SendEventAsync finished
...
The thread 0x4f4 has exited with code 0 (0x0).
The thread 0xe10 has exited with code 0 (0x0).
The thread 0x3c8 has exited with code 0 (0x0).
 10:09:06.773 AzureIoTHubClient SendEventAsync starting
 10:09:07.044 AzureIoTHubClient SendEventAsync finished
The thread 0xf70 has exited with code 0 (0x0).
The thread 0x1214 has exited with code 0 (0x0).
 10:09:16.819 AzureIoTHubClient SendEventAsync starting
 10:09:17.104 AzureIoTHubClient SendEventAsync finished
The thread 0x1358 has exited with code 0 (0x0).
The thread 0x400 has exited with code 0 (0x0).
 10:09:26.802 AzureIoTHubClient SendEventAsync starting
 10:09:27.064 AzureIoTHubClient SendEventAsync finished
The thread 0x920 has exited with code 0 (0x0).
The thread 0x1684 has exited with code 0 (0x0).
The thread 0x4ec has exited with code 0 (0x0).
 10:09:36.759 AzureIoTHubClient SendEventAsync starting
'backgroundTaskHost.exe' (CoreCLR: CoreCLR_UWP_Domain): Loaded 'C:\Data\Programs\WindowsApps\Microsoft.NET.CoreFramework.Debug.2.2_2.2.27505.2_arm__8wekyb3d8bbwe\System.Net.Requests.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
'backgroundTaskHost.exe' (CoreCLR: CoreCLR_UWP_Domain): Loaded 'C:\Data\Programs\WindowsApps\Microsoft.NET.CoreFramework.Debug.2.2_2.2.27505.2_arm__8wekyb3d8bbwe\System.Net.WebSockets.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
Sending payload to AzureIoTHub failed:CONNECT failed: RefusedNotAuthorized

I went and looked at the NuGet package details and it seemed a bit old.

I have the RedGate Reflector plugin installed on my development box so I quickly disassembled the Microsoft.Devices.TPM assembly to see what was going on. The Reflector code is pretty readable and it wouldn’t take much “refactoring” to get it looking like “human” generated code.

public string GetSASToken(uint validity = 0xe10)
{
    string deviceId = this.GetDeviceId();
    string hostName = this.GetHostName();
    long num = (DateTime.get_Now().ToUniversalTime().ToFileTime() / 0x98_9680L) - 0x2_b610_9100L;
    string str3 = "";
    if ((hostName.Length > 0) && (deviceId.Length > 0))
    {
        object[] objArray1 = new object[] { hostName, "/devices/", deviceId, "\n", (long) num };
        byte[] bytes = new UTF8Encoding().GetBytes(string.Concat((object[]) objArray1));
        byte[] buffer2 = this.SignHmac(bytes);
        if (buffer2.Length != 0)
        {
            string str5 = this.AzureUrlEncode(Convert.ToBase64String(buffer2));
            object[] objArray2 = new object[] { "SharedAccessSignature sr=", hostName, "/devices/", deviceId, "&sig=", str5, "&se=", (long) num };
            str3 = string.Concat((object[]) objArray2);
        }
    }
    return str3;
}

The validity parameter appears to not be used. Below is the current code from the Azure IoT C# SDK repository on GitHub; the two are different, and in the current code the validity is used.

public string GetSASToken(uint validity = 3600)
{
   const long WINDOWS_TICKS_PER_SEC = 10000000;
   const long EPOCH_DIFFERNECE = 11644473600;
   string deviceId = GetDeviceId();
   string hostName = GetHostName();
   long expirationTime = (DateTime.Now.ToUniversalTime().ToFileTime() / WINDOWS_TICKS_PER_SEC) - EPOCH_DIFFERNECE;
   expirationTime += validity;
   string sasToken = "";
   if ((hostName.Length > 0) && (deviceId.Length > 0))
   {
      // Encode the message to sign with the TPM
      UTF8Encoding utf8 = new UTF8Encoding();
      string tokenContent = hostName + "/devices/" + deviceId + "\n" + expirationTime;
      Byte[] encodedBytes = utf8.GetBytes(tokenContent);

      // Sign the message
      Byte[] hmac = SignHmac(encodedBytes);

      // if we got a signature foramt it
      if (hmac.Length > 0)
      {
         // Encode the output and assemble the connection string
         string hmacString = AzureUrlEncode(System.Convert.ToBase64String(hmac));
         sasToken = "SharedAccessSignature sr=" + hostName + "/devices/" + deviceId + "&sig=" + hmacString + "&se=" + expirationTime;
         }
   }
   return sasToken;
}

I went back and looked at the GitHub history and it looks like a patch was applied after the NuGet packages were released in May 2016.

If you read from the TPM and get nothing, make sure you're using the right TPM slot number and have "System Management" checked in the Capabilities tab of the application manifest.
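
That checkbox corresponds to the iot systemManagement capability in Package.appxmanifest; a snippet (assuming the iot namespace is declared and listed in IgnorableNamespaces):

<Package
  ...
  xmlns:iot="http://schemas.microsoft.com/appx/manifest/iot/windows10"
  IgnorableNamespaces="uap mp iot">
  ...
  <Capabilities>
    <iot:Capability Name="systemManagement" />
  </Capabilities>
</Package>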

I'm still not certain the validity is being applied correctly and will dig into this in a future post.
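
In the meantime, a quick way to check what validity a token actually has is to pull the se= value (Unix epoch seconds) out of the SAS token string; a hedged diagnostic sketch, continuing the TpmDevice example above:

string sasToken = tpmDevice.GetSASToken(3600);

// The token ends with &se=<expiry in Unix epoch seconds>
string expiryText = sasToken.Substring(sasToken.LastIndexOf("&se=") + "&se=".Length);
DateTime expiresAtUtc = DateTimeOffset.FromUnixTimeSeconds(long.Parse(expiryText)).UtcDateTime;

Debug.WriteLine($"SAS token expires at {expiresAtUtc:HH:mm:ss} UTC, {(expiresAtUtc - DateTime.UtcNow).TotalMinutes:F1} minutes away");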

Azure IoT Hub nRF24L01 Windows 10 IoT Core Field Gateway with BorosRF2

A couple of BorosRF2 Dual nRF24L01 Hats arrived earlier in the week. After some testing with my nRF24L01 Test application I have added compile-time configuration options for the two nRF24L01 sockets to my Azure IoT Hub nRF24L01 Field Gateway.

Boros RF2 with Dual nRF24L01 devices
public sealed class StartupTask : IBackgroundTask
{
   private const string ConfigurationFilename = "config.json";

   private const byte MessageHeaderPosition = 0;
   private const byte MessageHeaderLength = 1;

   // nRF24 Hardware interface configuration
#if CEECH_NRF24L01P_SHIELD
   private const byte RF24ModuleChipEnablePin = 25;
   private const byte RF24ModuleChipSelectPin = 0;
   private const byte RF24ModuleInterruptPin = 17;
#endif

#if BOROS_RF2_SHIELD_RADIO_0
   private const byte RF24ModuleChipEnablePin = 24;
   private const byte RF24ModuleChipSelectPin = 0;
   private const byte RF24ModuleInterruptPin = 27;
#endif

#if BOROS_RF2_SHIELD_RADIO_1
   private const byte RF24ModuleChipEnablePin = 25;
   private const byte RF24ModuleChipSelectPin = 1;
   private const byte RF24ModuleInterruptPin = 22;
#endif

   private readonly LoggingChannel logging = new LoggingChannel("devMobile Azure IotHub nRF24L01 Field Gateway", null, new Guid("4bd2826e-54a1-4ba9-bf63-92b73ea1ac4a"));
   private readonly RF24 rf24 = new RF24();

This version supports one nRF24L01 device socket active at a time.
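
The socket is selected with a conditional compilation symbol; a minimal sketch is to define exactly one of the symbols, either in the project's "Conditional compilation symbols" build setting or at the top of the source file:

// Define exactly one of these (before any other code in the file) to select the socket
#define BOROS_RF2_SHIELD_RADIO_0
//#define BOROS_RF2_SHIELD_RADIO_1
//#define CEECH_NRF24L01P_SHIELD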

Enabling both nRF24L01 device sockets broke outbound message routing in a prototype branch with cloud to device (C2D) messaging support. This functionality is part of an Over The Air (OTA) device provisioning implementation I'm working on.