.NET Core web API + Dapper – MultiMapping

Shaping recordsets with SplitOn

Sometimes there is no easy way to build a “list of lists” using the contents of multiple database tables. I have run into this problem a few times especially when building webby services which query the database of a “legacy” (aka. production) system.

Retrieving a list of StockGroups and their StockItems from the World Wide Importers database was one of the better “real world” examples I could come up with.

SQL Server Management Studio Diagram showing relationships of tables

There is a fair bit of duplication (StockGroupID, StockGroupName) in the results set

SQL Server Management Studio StockItems-StockItemStockGroups-StockGroups query and results

There were 442 rows in the results set and 227 StockItems in the database so I ordered the query results by StockItemID and confirmed that there were many StockItems in several StockGroups.

public class StockItemListDtoV1
{
	public int Id { get; set; }

	public string Name { get; set; }

	public decimal RecommendedRetailPrice { get; set; }

	public decimal TaxRate { get; set; }
}

public class StockGroupStockItemsListDto
{
	public StockGroupStockItemsListDto()
	{
		StockItems = new List<StockItemListDtoV1>();
	}

	public int StockGroupID { get; set; }

	public string StockGroupName { get; set; }

	public List<StockItemListDtoV1> StockItems { get; set; }
}

My initial version uses a Generic List for a StockGroup’s StockItems which is most probably not a good idea.

[Route("api/[controller]")]
[ApiController]
public class InvoiceQuerySplitOnController : ControllerBase
{
	private readonly string connectionString;
	private readonly ILogger<InvoiceQuerySplitOnController> logger;

	public InvoiceQuerySplitOnController(IConfiguration configuration, ILogger<InvoiceQuerySplitOnController> logger)
	{
		this.connectionString = configuration.GetConnectionString("WorldWideImportersDatabase");

		this.logger = logger;
	}

	[HttpGet]
	public async Task<ActionResult<IAsyncEnumerable<StockGroupStockItemsListDto>>> Get()
	{
		IEnumerable<StockGroupStockItemsListDto> response = null;

		try
		{
			using (SqlConnection db = new SqlConnection(this.connectionString))
			{
				var stockGroups = await db.QueryAsync<StockGroupStockItemsListDto, StockItemListDtoV1, StockGroupStockItemsListDto>(
					sql: @"SELECT [StockGroups].[StockGroupID] as 'StockGroupID'" +
								",[StockGroups].[StockGroupName]" +
								",[StockItems].StockItemID as 'ID'" +
								",[StockItems].StockItemName as 'Name'" +
								",[StockItems].TaxRate" +
								",[StockItems].RecommendedRetailPrice " +
							"FROM [Warehouse].[StockGroups] " +
							"INNER JOIN[Warehouse].[StockItemStockGroups] ON ([StockGroups].[StockGroupID] = [StockItemStockGroups].[StockGroupID])" +
							"INNER JOIN[Warehouse].[StockItems] ON ([Warehouse].[StockItemStockGroups].[StockItemID] = [StockItems].[StockItemID])",
					(stockGroup, stockItem) =>
					{
						// Not certain I think using a List<> here is a good idea...
						stockGroup.StockItems.Add(stockItem);
						return stockGroup;
					},
				splitOn: "ID",
				commandType: CommandType.Text);

				response = stockGroups.GroupBy(p => p.StockGroupID).Select(g =>
				{
					var groupedStockGroup = g.First();
					groupedStockGroup.StockItems = g.Select(p => p.StockItems.Single()).ToList();
					return groupedStockGroup;
				});
			}
		}
		catch (SqlException ex)
		{
			logger.LogError(ex, "Retrieving list of StockGroups with their StockItems");

			return this.StatusCode(StatusCodes.Status500InternalServerError);
		}

		return this.Ok(response);
	}
}

The MultiMapper syntax always trips me up and it usually takes a couple of attempts to get it to work.

List of StockGroups with StockItems

I have extended my DapperTransient module adding WithRetry versions of the 14 MultiMapper methods.
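The extension methods are thin wrappers around the underlying Dapper calls; a minimal sketch of what one of the two-type multi-mapping overloads could look like, assuming Polly for the retry policy, Microsoft.Data.SqlClient, and an illustrative subset of transient SQL error numbers:

using System;
using System.Collections.Generic;
using System.Data;
using System.Threading.Tasks;

using Dapper;
using Microsoft.Data.SqlClient;
using Polly;
using Polly.Retry;

public static class DapperTransientExtensions
{
	// Retry policy shared by the WithRetry extension methods, with an illustrative
	// subset of SqlException error numbers treated as transient.
	private static readonly AsyncRetryPolicy RetryPolicy = Policy
		.Handle<SqlException>(ex => ex.Number is -2 or 4060 or 40197 or 40501 or 40613)
		.WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

	public static Task<IEnumerable<TReturn>> QueryWithRetryAsync<TFirst, TSecond, TReturn>(
		this IDbConnection connection,
		string sql,
		Func<TFirst, TSecond, TReturn> map,
		object param = null,
		string splitOn = "Id",
		CommandType? commandType = null)
	{
		// Each retry re-issues the whole query, which is fine for idempotent SELECTs.
		return RetryPolicy.ExecuteAsync(() => connection.QueryAsync(sql, map, param, splitOn: splitOn, commandType: commandType));
	}
}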

.NET Core web API + Dapper – QueryMultiple

Returning multiple recordsets

My current “day job” is building applications for managing portfolios of foreign currency instruments. A portfolio can contain many different types of instrument (Forwards, Options, Swaps etc.). One of the “optimisations” we use is retrieving all the different types of instruments in a portfolio with one stored procedure call.

SQL Server Management Studio Dependency viewer

The closest scenario I could come up with using the World Wide Importers database was retrieving a summary of all the information associated with an Invoice for display on a single screen.

CREATE PROCEDURE [Sales].[InvoiceSummaryGetV1](@InvoiceID as int)
AS
BEGIN

SELECT [InvoiceID]
--        ,[CustomerID]
--        ,[BillToCustomerID]
		,[OrderID]
		,[Invoices].[DeliveryMethodID]
		,[DeliveryMethodName]
--        ,[ContactPersonID]
--        ,[AccountsPersonID]
		,[SalespersonPersonID] as SalesPersonID
		,[SalesPerson].[PreferredName] as SalesPersonName
--        ,[PackedByPersonID]
		,[InvoiceDate]
		,[CustomerPurchaseOrderNumber]
		,[IsCreditNote]
		,[CreditNoteReason]
		,[Comments]
		,[DeliveryInstructions]
--        ,[InternalComments]
--        ,[TotalDryItems]
--        ,[TotalChillerItems]
		,[DeliveryRun]
		,[RunPosition] as DeliveryRunPosition
		,[ReturnedDeliveryData] as DeliveryData
		,[ConfirmedDeliveryTime] as DeliveredAt
		,[ConfirmedReceivedBy] as DeliveredTo
--        ,[LastEditedBy]
--        ,[LastEditedWhen]
	FROM [Sales].[Invoices]
	INNER JOIN [Application].[People] as SalesPerson ON (Invoices.[SalespersonPersonID] = [SalesPerson].[PersonID])
	INNER JOIN [Application].[DeliveryMethods] as DeliveryMethod ON (Invoices.[DeliveryMethodID] = DeliveryMethod.[DeliveryMethodID])
WHERE ([Invoices].[InvoiceID] = @InvoiceID)

SELECT [InvoiceLineID]
      ,[InvoiceID]
      ,[StockItemID]
      ,[Description] as StockItemDescription
      ,[InvoiceLines].[PackageTypeID]
	  ,[PackageType].[PackageTypeName]
      ,[Quantity]
      ,[UnitPrice]
      ,[TaxRate]
      ,[TaxAmount]
--      ,[LineProfit]
      ,[ExtendedPrice]
--      ,[LastEditedBy]
--      ,[LastEditedWhen]
	FROM [Sales].[InvoiceLines]
		INNER JOIN [Warehouse].[PackageTypes] as PackageType ON ([PackageType].[PackageTypeID] = [InvoiceLines].[PackageTypeID])
WHERE ([InvoiceLines].[InvoiceID] = @InvoiceID)

SELECT [StockItemTransactionID]
      ,[StockItemTransactions].[StockItemID]
      ,StockItem.[StockItemName] as StockItemName
      ,[StockItemTransactions].[TransactionTypeID]
      ,[TransactionType].[TransactionTypeName]
--      ,[CustomerID]
--      ,[InvoiceID]
--      ,[SupplierID]
--      ,[PurchaseOrderID]
      ,[TransactionOccurredWhen] as TransactionAt
      ,[Quantity]
--      ,[LastEditedBy]
--      ,[LastEditedWhen]
	FROM [Warehouse].[StockItemTransactions]
	INNER JOIN [Warehouse].[StockItems] as StockItem ON ([StockItemTransactions].StockItemID = [StockItem].StockItemID)
	INNER JOIN [Application].[TransactionTypes] as TransactionType ON ([StockItemTransactions].[TransactionTypeID] = TransactionType.[TransactionTypeID])
	WHERE ([StockItemTransactions].[InvoiceID] = @InvoiceID)

END

The stored procedure returns 3 recordsets: a “summary” of the Invoice, a summary of the associated InvoiceLines and a summary of the associated StockItemTransactions.

public async Task<ActionResult<Model.InvoiceSummaryGetDtoV1>> Get([Range(1, int.MaxValue, ErrorMessage = "Invoice id must be greater than 0")] int id)
{
	Model.InvoiceSummaryGetDtoV1 response = null;

	try
	{
		using (SqlConnection db = new SqlConnection(this.connectionString))
		{
			var invoiceSummary = await db.QueryMultipleWithRetryAsync("[Sales].[InvoiceSummaryGetV1]", param: new { InvoiceId = id }, commandType: CommandType.StoredProcedure);

			response = await invoiceSummary.ReadSingleOrDefaultWithRetryAsync<Model.InvoiceSummaryGetDtoV1>();
			if (response == default)
			{
				logger.LogInformation("Invoice:{0} not found", id);

				return this.NotFound($"Invoice:{id} not found");
			}

			response.InvoiceLines = (await invoiceSummary.ReadWithRetryAsync<Model.InvoiceLineSummaryListDtoV1>()).ToArray();

			response.StockItemTransactions = (await invoiceSummary.ReadWithRetryAsync<Model.StockItemTransactionSummaryListDtoV1>()).ToArray();
		}
	}
	catch (SqlException ex)
	{
		logger.LogError(ex, "Retrieving Invoice, Invoice Lines or Stock Item Transactions");

		return this.StatusCode(StatusCodes.Status500InternalServerError);
	}

	return this.Ok(response);
}

I use Google Chrome, Mozilla Firefox, Microsoft Edgeium, and Opera but the screen capture was done with Firefox, mainly because it formats the JavaScript Object Notation (JSON) response payloads nicely.

FireFox displaying Invoice Summary response

I had to extend the DapperTransient module to add SqlMapper extension retry methods (plus all the different overloads).
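A sketch of what the QueryMultiple wrapper might look like, with the same assumptions as the multi-mapping sketch above (Polly, Microsoft.Data.SqlClient):

using System;
using System.Data;
using System.Threading.Tasks;

using Dapper;
using Microsoft.Data.SqlClient;
using Polly;
using Polly.Retry;

public static class SqlMapperRetryExtensions
{
	private static readonly AsyncRetryPolicy RetryPolicy = Policy
		.Handle<SqlException>()
		.WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

	// Retries the stored procedure call itself; the ReadWithRetryAsync/ReadSingleOrDefaultWithRetryAsync
	// methods would be similar thin wrappers around the GridReader Read* methods.
	public static Task<SqlMapper.GridReader> QueryMultipleWithRetryAsync(
		this IDbConnection connection,
		string sql,
		object param = null,
		CommandType? commandType = null)
	{
		return RetryPolicy.ExecuteAsync(() => connection.QueryMultipleAsync(sql, param, commandType: commandType));
	}
}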

.NET Core web API + Dapper – Caching

Response Cache

In the beginning this was a long, long post about In-memory caching, distributed caching, Response caching, Response caching with middleware and Object reuse with ObjectPool. As I was re-reading the post before publishing it I came to the realisation that these different caching approaches didn’t require Dapper.

I started again, but kept the first section as it covers one of the simplest possible approaches to caching using the [ResponseCache] attribute and VaryByQueryKeys.

[HttpGet("Response")]
[ResponseCache(Duration = StockItemsListResponseCacheDuration)]
public async Task<ActionResult<IAsyncEnumerable<Model.StockItemListDtoV1>>> GetResponse()
{
	IEnumerable<Model.StockItemListDtoV1> response = null;

	logger.LogInformation("Response cache load");

	try
	{
		response = await dapper.QueryAsync<Model.StockItemListDtoV1>(sql: @"SELECT [StockItemID] as ""ID"", [StockItemName] as ""Name"", [RecommendedRetailPrice], [TaxRate] FROM [Warehouse].[StockItems]", commandType: CommandType.Text);
	}
	catch (SqlException ex)
	{
		logger.LogError(ex, "Retrieving list of StockItems");

		return this.StatusCode(StatusCodes.Status500InternalServerError);
	}

	return this.Ok(response);
}

[HttpGet("ResponseVarying")]
[ResponseCache(Duration = StockItemsListResponseCacheDuration, VaryByQueryKeys = new string[] { "id" })]
public async Task<ActionResult<Model.StockItemGetDtoV1>> Get([FromQuery(Name = "id"), Range(1, int.MaxValue, ErrorMessage = "Stock item id must be greater than 0")] int id)
{
	Model.StockItemGetDtoV1 response = null;

	logger.LogInformation("Response cache varying load id:{0}", id);

	try
	{
		response = await dapper.QuerySingleOrDefaultAsync<Model.StockItemGetDtoV1>(sql: "[Warehouse].[StockItemsStockItemLookupV1]", param: new { stockItemId = id }, commandType: CommandType.StoredProcedure);
		if (response == default)
		{
			logger.LogInformation("StockItem:{0} not found", id);

			return this.NotFound($"StockItem:{id} not found");
		}
	}
	catch (SqlException ex)
	{
		logger.LogError(ex, "Looking up StockItem with Id:{0}", id);

		return this.StatusCode(StatusCodes.Status500InternalServerError);
	}

	return this.Ok(response);
}

I use Google Chrome, Mozilla Firefox, Microsoft Edgeium, and Opera but the screen captures were done with Firefox, mainly because it formats the JavaScript Object Notation (JSON) response payloads nicely.

All the browsers appeared to respect the cache control headers but Firefox was the only one which did not initiate a new request when I pressed return in the Uniform Resource Locator (URL) field.

Firefox displaying list of stock items

I used Telerik Fiddler and FiddlerFox to capture the HTTP GET method request and response payloads.

Fiddler Fox extension details
Response payload for a list of StockItems with cache control headers highlighted
Firefox displaying single stock item
Response payload for a single StockItem with cache control headers highlighted

Dapper Cache

The Dapper.Extensions library extends Dapper’s functionality with built-in support for in-memory and Redis caching. It requires minimal configuration, but I was tripped up by the default connection string requirement because I was using dependency injection.

Dapper.Extensions NuGet package configuration

The configuration code in the application startup.cs supports in-memory and Redis caches.

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
	services.AddControllers();

	services.AddResponseCaching();

	services.AddDapperForMSSQL();

#if DAPPER_EXTENSIONS_CACHE_MEMORY
	services.AddDapperCachingInMemory(new MemoryConfiguration
	{
		AllMethodsEnableCache = false
	});
#endif
#if DAPPER_EXTENSIONS_CACHE_REDIS
	services.AddDapperCachingInRedis(new RedisConfiguration
	{
		AllMethodsEnableCache = false,
		KeyPrefix = Configuration.GetConnectionString("RedisKeyPrefix"),
		ConnectionString = Configuration.GetConnectionString("RedisConnection")
	}); 
#endif
	services.AddApplicationInsightsTelemetry();
}

The StockItemsCachingController was rewritten with the Dapper.Extensions QueryAsync and QuerySingleOrDefaultAsync methods.

[HttpGet("DapperMemory")]
public async Task<ActionResult<IAsyncEnumerable<Model.StockItemListDtoV1>>> GetDapper()
{
	List<Model.StockItemListDtoV1> response;

	logger.LogInformation("Dapper cache load");

	try
	{
		response = await dapper.QueryAsync<Model.StockItemListDtoV1>(
							sql: @"SELECT [StockItemID] as ""ID"", [StockItemName] as ""Name"", [RecommendedRetailPrice], [TaxRate] FROM [Warehouse].[StockItems]",
							commandType: CommandType.Text,
							enableCache: true,
							cacheExpire: TimeSpan.Parse(this.Configuration.GetValue<string>("DapperCachingDuration"))
					);

	}
	catch (SqlException ex)
	{
		logger.LogError(ex, "Retrieving list of StockItems");

		return this.StatusCode(StatusCodes.Status500InternalServerError);
	}

	return this.Ok(response);
}

[HttpGet("DapperMemoryVarying")]
public async Task<ActionResult<Model.StockItemGetDtoV1>> GetDapperVarying([FromQuery(Name = "id"), Range(1, int.MaxValue, ErrorMessage = "Stock item id must be greater than 0")] int id)
{
	Model.StockItemGetDtoV1 response = null;

	logger.LogInformation("Dapper cache varying load id:{0}", id);

	try
	{
		response = await dapper.QuerySingleOrDefaultAsync<Model.StockItemGetDtoV1>(
					sql: "[Warehouse].[StockItemsStockItemLookupV1]",
					param: new { stockItemId = id },
					commandType: CommandType.StoredProcedure,
					cacheKey: $"StockItem:{id}",
					enableCache: true,
					cacheExpire: TimeSpan.Parse(this.Configuration.GetValue<string>("DapperCachingDuration"))
							);
		if (response == default)
		{
			logger.LogInformation("StockItem:{0} not found", id);

			return this.NotFound($"StockItem:{id} not found");
		}
	}
	catch (SqlException ex)
	{
		logger.LogError(ex, "Looking up StockItem with Id:{0}", id);

		return this.StatusCode(StatusCodes.Status500InternalServerError);
	}

	return this.Ok(response);
}

Both the Dapper.Extensions in-memory and Redis caches reduced the number of database requests to the bare minimum. In a larger application the formatting of the cacheKey values (cacheKey: "StockItems" and cacheKey: $"StockItem:{id}") would be important to stop database query result collisions (see the helper sketch below).

SQL Server Profiler displaying the list and single record requests.
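One way to keep the keys consistent is to build them in a single helper rather than formatting them inline; a trivial sketch (the prefix is hypothetical):

public static class CacheKeys
{
	// A per-application prefix keeps keys apart when several applications share a Redis instance.
	private const string Prefix = "WorldWideImporters";

	public static string StockItems() => $"{Prefix}:StockItems";

	public static string StockItem(int id) => $"{Prefix}:StockItem:{id}";
}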

I used Memurai, which is a Microsoft Windows version of Redis, for testing on my development machine before deploying to Microsoft Azure and using Azure Cache for Redis. Memurai runs as a Windows Service and supports master, replica, cluster node or sentinel roles.

Memurai running as a Windows Service on my development machine

When the Web API project was restarted the contents of the in-memory cache were lost. The Redis cache contents survive a restart and can be accessed from multiple clients.

The Dapper.Extensions Query, QueryAsync, QueryFirstOrDefaultAsync, QuerySingleOrDefault, QuerySingleOrDefaultAsync, QueryMultiple, QueryMultipleAsync, ExecuteReader, ExecuteReaderAsync, QueryPageAsync, QueryPlainPage, QueryPlainPageAsync, Execute, ExecuteAsync, ExecuteScalar, ExecuteScalarAsync, BeginTransaction, CommitTransaction and RollbackTransaction methods do not appear to have versions which “retry” actions when there is a “transient” failure. If there is no solution available I will build one using the approach in my DapperTransient module.

TTI V3 Connector Azure Storage Queues

The first Proof of Concept(PoC) for my updated The Things Industries(TTI) V3 Webhooks Integration was to explore the use of Azure Functions to securely ingest webhook calls. The aim was to have uplink messages and downlink message progress payloads written to Azure Storage Queues with output bindings, ready for processing.

namespace devMobile.IoT.TheThingsIndustries.HttpInputStorageQueueOutput
{
	using System.Net;
	using System.Threading.Tasks;

	using Microsoft.Azure.Functions.Worker;
	using Microsoft.Azure.Functions.Worker.Http;
	using Microsoft.Azure.WebJobs;
	using Microsoft.Extensions.Logging;


	[StorageAccount("AzureWebJobsStorage")]
	public static class Webhooks
	{
		[Function("Uplink")]
		public static async Task<HttpTriggerUplinkOutputBindingType> Uplink([HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req, FunctionContext context)
		{
			var logger = context.GetLogger("UplinkMessage");

			logger.LogInformation("Uplink processed");
			
			var response = req.CreateResponse(HttpStatusCode.OK);

			return new HttpTriggerUplinkOutputBindingType()
			{
				Name = await req.ReadAsStringAsync(),
				HttpResponse = response
			};
		}

		public class HttpTriggerUplinkOutputBindingType
		{
			[QueueOutput("uplink")]
			public string Name { get; set; }

			public HttpResponseData HttpResponse { get; set; }
		}

...

		[Function("Failed")]
		public static async Task<HttpTriggerFailedOutputBindingType> Failed([HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req, FunctionContext context)
		{
			var logger = context.GetLogger("Failed");

			logger.LogInformation("Failed procssed");

			var response = req.CreateResponse(HttpStatusCode.OK);

			return new HttpTriggerFailedOutputBindingType()
			{
				Name = await req.ReadAsStringAsync(),
				HttpResponse = response
			};
		}

		public class HttpTriggerFailedOutputBindingType
		{
			[QueueOutput("failed")]
			public string Name { get; set; }

			public HttpResponseData HttpResponse { get; set; }
		}
	}
}

After some initial problems with the use of Azure Storage Queue output bindings to insert messages into the ack, nak, failed, queued, and uplink Azure Storage Queues I found it didn’t take much code and worked reliably on my desktop.

Azure Functions Desktop Development environment running my functions

I used Telerik Fiddler with some sample payloads to test my application.

Telerik Fiddler Request Composer “posting” sample message to desktop endpoint

Once the functions were running reliably on my desktop, I created an Azure Service Plan, deployed the code, then generated an API Key for securing my HTTPTrigger endpoints.

Azure Functions Host Key configuration dialog

I then added a TTI Webhook Integration to my TTI SeeduinoLoRaWAN application, manually configured the endpoint, enabled the different messages I wanted to process and set the x-functions-key header.

TTI Application Webhook configuration

After a short delay I could see messages in the message uplink queue with Azure Storage Explorer

Azure Storage Explorer displaying content of my uplink queue

Building a new version of my TTIV3 Azure IoT connector is a useful learning exercise but I’m still deciding whether it is worth the effort as TTI now has one.

TTN V3 Connector Revisited

Earlier in the year I built The Things Network(TTN) V2 and V3 connectors and after using these in production applications I have learnt a lot about what I had got wrong, what I had got less wrong and what I had got right.

Using a TTN V3 MQTT Application integration wasn’t a great idea as the management of state was very complex. The storage of application keys in an app.settings file made configuration easy but was bad for security.

The use of Azure Key Vault in the TTNV2 connector was a good approach, but the process of creation and updating of the settings needs to be easier.

Using TTN device registry as the “single source of truth” was a good decision as managing the amount of LoRaWAN network, application and device specific configuration in an Azure IoT Hub would be non-trivial.

Using a Webhooks Application Integration like the TTNV2 connector is my preferred approach.

The TTNV2 Connector’s use of Azure Storage Queues was a good idea as they provide an elastic buffer between the different parts of the application.

The use of Azure Functions to securely ingest webhook calls and write them to Azure Storage Queues with output bindings should simplify configuration and deployment. The use of Azure Storage Queue input bindings to process messages is the preferred approach (see the sketch below).
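A minimal sketch of a queue-triggered uplink processor using the isolated worker model from the earlier functions (the queue and connection names are placeholders):

using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class UplinkProcessor
{
	[Function("UplinkMessageProcessor")]
	public static void Run(
		[QueueTrigger("uplink", Connection = "AzureWebJobsStorage")] string payload,
		FunctionContext context)
	{
		var logger = context.GetLogger("UplinkMessageProcessor");

		// Deserialise the TTN uplink payload, provision the device if required,
		// then forward the telemetry to the Azure IoT Hub or Azure IoT Central.
		logger.LogInformation("Uplink message:{payload}", payload);
	}
}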

The TTN V3 processing of JSON uplink messages into a structure that Azure IoT Central could ingest is a required feature.

The TTN V2 and V3 support for the Azure Device Provisioning Service(DPS) is a required feature (mandated by Azure IoT Central). The TTN V3 connector support for DTDLV2 is a desirable feature. The DPS implementation worked with Azure IoT Central but I was unable to get the DeviceClient based version working.

Using DPS to pre-provision devices in Azure IoT Hubs and Azure IoT Central by using the TTN Application Registry API to enumerate the TTN applications, then their devices, needs to be revisited as it was initially slow and then became quite complex.

The support for Azure IoT Hub connection strings was a useful feature, but added some complexity. This plus basic Azure IoT Hub DPS support(No Azure IoT Central support) could be implemented in a standalone application which connects via Azure Storage Queue messages.

The processing of Azure IoT Central Basic and Request commands, then translating the payloads so they work with TTN V3, is a required feature. The management of Azure IoT Hub command delivery confirmations (Abandon, Complete and Reject) is a required feature.

I’m considering building a new TTN V3 connector but is it worth the effort as TTN has one now?

Security Camera Azure IoT Hub Image upload

The final two projects of this series both upload images to the Azure Storage account associated with an Azure IoT Hub. One project uses a Timer to upload pictures with a configurable delay. The other uploads an image every time a General Purpose Input Output(GPIO) pin on the Raspberry PI 3 is strobed.

Uniview IPC3635SB-ADZK-I0 Security camera test rig with Raspberry PI and PIR motion detector

I tried to keep the .Net Core 5 console applications as simple as possible; they download an image from the camera “snapshot” endpoint (in this case http://10.0.0.47:85/images/snapshot.jpg), save it to the local filesystem and then upload it.

The core of the two applications is the “upload” image method, which is called by a timer or GPIO pin EventHandler

private static async void ImageUpdateTimerCallback(object state)
{
	CommandLineOptions options = (CommandLineOptions)state;
	DateTime requestAtUtc = DateTime.UtcNow;

	// Just in case - stop the code being called while retrieval of a photo is already in progress
	if (cameraBusy)
	{
		return;
	}
	cameraBusy = true;

	Console.WriteLine($"{requestAtUtc:yy-MM-dd HH:mm:ss} Image up load start");

	try
	{
		// First go and get the image file from the camera onto local file system
		using (var client = new WebClient())
		{
			NetworkCredential networkCredential = new NetworkCredential()
			{
				UserName = options.UserName,
				Password = options.Password
			};

			client.Credentials = networkCredential;

			await client.DownloadFileTaskAsync(new Uri(options.CameraUrl), options.LocalFilename);
		}

		// Then open the file ready to stream it up to the storage account associated with the Azure IoT Hub
		using (FileStream fileStreamSource = new FileStream(options.LocalFilename, FileMode.Open))
		{
			var fileUploadSasUriRequest = new FileUploadSasUriRequest
			{
				BlobName = string.Format("{0:yyMMdd}/{0:yyMMddHHmmss}.jpg", requestAtUtc)
			};

			// Get the plumbing sorted for where the file is going in Azure Storage
			FileUploadSasUriResponse sasUri = await azureIoTCentralClient.GetFileUploadSasUriAsync(fileUploadSasUriRequest);
			Uri uploadUri = sasUri.GetBlobUri();

			try
			{
				var blockBlobClient = new BlockBlobClient(uploadUri);

				var response = await blockBlobClient.UploadAsync(fileStreamSource, new BlobUploadOptions());

				var successfulFileUploadCompletionNotification = new FileUploadCompletionNotification()
				{
					// Mandatory. Must be the same value as the correlation id returned in the sas uri response
					CorrelationId = sasUri.CorrelationId,

					// Mandatory. Will be present when service client receives this file upload notification
					IsSuccess = true,

					// Optional, user defined status code. Will be present when service client receives this file upload notification
					StatusCode = 200,

					// Optional, user-defined status description. Will be present when service client receives this file upload notification
					StatusDescription = "Success"
				};

				await azureIoTCentralClient.CompleteFileUploadAsync(successfulFileUploadCompletionNotification);
			}
			catch (Exception ex)
			{
				Console.WriteLine($"Failed to upload file to Azure Storage using the Azure Storage SDK due to {ex}");

				var failedFileUploadCompletionNotification = new FileUploadCompletionNotification
				{
					// Mandatory. Must be the same value as the correlation id returned in the sas uri response
					CorrelationId = sasUri.CorrelationId,

					// Mandatory. Will be present when service client receives this file upload notification
					IsSuccess = false,

					// Optional, user-defined status code. Will be present when service client receives this file upload notification
					StatusCode = 500,

					// Optional, user defined status description. Will be present when service client receives this file upload notification
					StatusDescription = ex.Message
				};

				await azureIoTCentralClient.CompleteFileUploadAsync(failedFileUploadCompletionNotification);
			}
		}

		TimeSpan uploadDuration = DateTime.UtcNow - requestAtUtc;

		Console.WriteLine($"{requestAtUtc:yy-MM-dd HH:mm:ss} Image up load done. Duration:{uploadDuration.TotalMilliseconds:0.} mSec");
	}
	catch (Exception ex)
	{
		Console.WriteLine($"Camera image upload process failed {ex.Message}");
	}
	finally
	{
		cameraBusy = false;
	}
}
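The wiring that invokes the callback isn’t shown above; a rough sketch of the two entry points (fragments from the respective Main methods), assuming System.Device.Gpio for the PIR version and hypothetical UpdatePeriod and PirSensorPin command line options:

// Timer version - invoke the upload callback immediately and then on a configurable period.
Timer imageUpdateTimer = new Timer(ImageUpdateTimerCallback, options, TimeSpan.Zero, TimeSpan.FromMinutes(options.UpdatePeriod));

// PIR version - invoke the upload callback when the GPIO pin wired to the PIR motion detector goes high.
GpioController gpioController = new GpioController();

gpioController.OpenPin(options.PirSensorPin, PinMode.InputPullDown);

gpioController.RegisterCallbackForPinValueChangedEvent(
	options.PirSensorPin,
	PinEventTypes.Rising,
	(sender, pinValueChangedEventArgs) => ImageUpdateTimerCallback(options));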

I have used Azure DeviceClient UploadToBlobAsync in other projects and it was a surprise to see it deprecated and replaced with GetFileUploadSasUriAsync and GetBlobUri with sample code from the development team.

string blobName = string.Format("{0:yyMMdd}/{0:yyMMddHHmmss}.jpg", requestAtUtc);

azureIoTCentralClient.UploadToBlobAsync(blobName, fileStreamSource);

It did seem to take a lot of code to implement what was previously a single line (I’m going to try and find out why this method has been deprecated).

Timer application image uploader

Using Azure Storage Explorer I could view and download the images uploaded by the application(s) running on my development machine and Raspberry PI

Azure Storage Displaying most recent image uploaded by a RaspberryPI device

After confirming the program was working I used the excellent RaspberryDebugger to download the application and debug it on my Raspberry PI 3 running the Raspberry PI OS.

Now that the basics are working my plan is to figure out how to control the camera using Azure IoT Hub method calls, display live Real Time Streaming Protocol(RTSP) video using Azure IoT Hub Device Streams, upload images to Azure Cognitive Services for processing and use ML.Net to process them locally.

Security Camera HTTP Image download

As part of a contract a customer sent me a Uniview IPC3635SB-ADZK-I0 Security camera for a proof of concept(PoC) project. Before the PoC I wanted to explore the camera functionality in more depth, especially how to retrieve individual images from the camera and remotely control its zoom, focus, pan, tilt etc. I’m trying to source a couple of other vendors’ security cameras with remotely controllable pan and tilt for testing.

Uniview IPC3635SB-ADZK-I0 Security camera

It appears that many cameras support retrieving the latest image with a HyperText Transfer Protocol (HTTP) GET so that looked like a good place to start. For the next couple of posts the camera will be sitting on the bookcase in my office looking through the window at the backyard.

Unv camera software live view of my backyard

One thing I did notice (then confirmed with Telerik Fiddler and in the camera configuration) was that the camera was configured to use Digest authentication (RFC 2069), which broke my initial attempt with a Universal Windows Platform(UWP) application.

Telerik Fiddler showing 401 authorisation challenge
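The WebClient code below handles the Digest challenge once credentials are supplied; for HttpClient the equivalent is a CredentialCache restricted to the Digest scheme. A sketch, with the credentials as placeholders:

using System;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class DigestSnapshotClient
{
	static async Task Main()
	{
		Uri cameraUri = new Uri("http://10.0.0.47:85/images/snapshot.jpg");

		// Restrict the credentials to Digest authentication for the camera endpoint.
		CredentialCache credentialCache = new CredentialCache
		{
			{ cameraUri, "Digest", new NetworkCredential("username", "password") }
		};

		using (HttpClientHandler handler = new HttpClientHandler { Credentials = credentialCache })
		using (HttpClient client = new HttpClient(handler))
		{
			byte[] image = await client.GetByteArrayAsync(cameraUri);

			await File.WriteAllBytesAsync("LatestImage.jpg", image);
		}
	}
}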

My .Net Core 5 console application is as simple as possible; it just downloads an image from the camera “snapshot” endpoint (in this case http://10.0.0.47:85/images/snapshot.jpg) and saves it to the local filesystem.

class Program
{
	static async Task Main(string[] args)
	{
		await Parser.Default.ParseArguments<CommandLineOptions>(args)
			.WithNotParsed(HandleParseError)
			.WithParsedAsync(ApplicationCore);
	}

	private static async Task ApplicationCore(CommandLineOptions options)
	{
		Console.WriteLine($"Camera:{options.CameraUrl} UserName:{options.UserName} filename:{options.Filename}");

		using (var client = new WebClient())
		{
			NetworkCredential networkCredential = new NetworkCredential()
			{
				UserName = options.UserName,
				Password = options.Password
			};

			client.Credentials = networkCredential;

			try
			{
				await client.DownloadFileTaskAsync(new Uri(options.CameraUrl), options.Filename);
			}
			catch (Exception ex)
			{
				Console.WriteLine($"File download failed {ex.Message}");
			}
		}

		Console.WriteLine("Press <enter> to exit");
		Console.ReadLine();
	}

	private static void HandleParseError(IEnumerable<Error> errors)
	{
		if (errors.IsVersion())
		{
			Console.WriteLine("Version Request");
			return;
		}

		if (errors.IsHelp())
		{
			Console.WriteLine("Help Request");
			return;
		}
		Console.WriteLine("Parser Fail");
	}
}

After confirming the program was working I used the excellent RaspberryDebugger to download the application and debug it on a Raspberry PI 3 running the Raspberry PI OS.

Visual Studio 2019 Debug Output showing application download process

Once the application had finished running on the device I wanted to check that the file was on the local filesystem. I used Putty to connect to the Raspberry PI then searched for LatestImage.jpg.

Linux find utility displaying the location of the downloaded file

I thought about using a utility like scp to download the image file but decided (because I have been using Microsoft Windows since Windows 286) to install xrdp, an open-source Remote Desktop Protocol(RDP) server, so I could use a Windows 10 RDP client.

xrdp login screen
xrdp home screen
xrdp file manager display files in application deployment directory
Raspberry PI OS default image view

Now that the basics are working my plan is to figure out how to control the camera, display live video with the Real Time Streaming Protocol(RTSP), upload images to Azure Cognitive Services for processing and use ML.Net to process them locally.

This post was about selecting the tooling I’m comfortable with and configuring my development environment so they work well together. The next step will be using Open Network Video Interface Forum (ONVIF) to discover, determine the capabilities of and then control the camera (for this device just zoom and focus).

.NET Core web API + Dapper – Web Caching

Web cache validation with eTags

On a couple of the systems I work on there are a number of queries (often complex spatial searches) which are very resource intensive but are quite readily cached. In these systems we have used HTTP GET and HEAD Request methods together so that the client only re-GETs the query results after a HEAD method indicates there have been updates.

I have been trying to keep the number of changes to my Microsoft SQL Azure World Wide Importers database to a minimum but for this post I have added a rowversion column to the StockGroups table. The rowversion data type is an automatically generated, unique 8 byte binary (12 characters when Base64 encoded) number within a database.

StockGroups table with Version column

Adding a rowversion column to an existing System Versioned table in the SQL Server Management Studio Designer is painful so I used…

ALTER TABLE [Warehouse].[StockGroups] ADD [Version] [timestamp] NULL

To reduce complexity the embedded SQL contains two commands (normally I wouldn’t do this): one for retrieving the list of StockGroups, the other for retrieving the maximum StockGroup rowversion. If a StockGroup is changed its rowversion will be “automagically” updated and the maximum value will change.

[HttpGet]
public async Task<ActionResult<IAsyncEnumerable<Model.StockGroupListDtoV1>>> Get()
{
	IEnumerable<Model.StockGroupListDtoV1> response = null;

	try
	{
		using (SqlConnection db = new SqlConnection(this.connectionString))
		{
			var parameters = new DynamicParameters();

			parameters.Add("@RowVersion", dbType: DbType.Binary, direction: ParameterDirection.Output, size: ETagBytesLength);

			response = await db.QueryAsync<Model.StockGroupListDtoV1>(sql: @"SELECT [StockGroupID] as ""ID"", [StockGroupName] as ""Name"" FROM [Warehouse].[StockGroups] ORDER BY Name; SELECT @RowVersion=MAX(Version) FROM [Warehouse].[StockGroups]", param: parameters, commandType: CommandType.Text);

			if (response.Any())
			{
				byte[] rowVersion = parameters.Get<byte[]>("RowVersion");

				this.HttpContext.Response.Headers.Add("ETag", Convert.ToBase64String(rowVersion));
			}
		}
	}
	catch (SqlException ex)
	{
		logger.LogError(ex, "Retrieving list of StockGroups");

		return this.StatusCode(StatusCodes.Status500InternalServerError);
	}

	return this.Ok(response);
}

I used Telerik Fiddler to capture the GET response payload.

HTTP/1.1 200 OK
Transfer-Encoding: chunked
Content-Type: application/json; charset=utf-8
ETag: AAAAAAABrdE=
Server: Microsoft-IIS/10.0
X-Powered-By: ASP.NET
Date: Sat, 26 Jun 2021 06:12:16 GMT

136
[
   {"id":5,"name":"Airline Novelties"},
   {"id":2,"name":"Clothing"},
   {"id":6,"name":"Computing Novelties"},
   {"id":8,"name":"Furry Footwear"},
   {"id":3,"name":"Mugs"},
   {"id":1,"name":"Novelty Items"},
   {"id":10,"name":"Packaging Material"},
   {"id":9,"name":"Toys"},
   {"id":4,"name":"T-Shirts"},
   {"id":7,"name":"USB Novelties"}
]
0

The HEAD method requests the maximum rowversion value from the StockGroups table and compares it to the eTag. In a more complex scenario this could be a call to a local cache to see if a query result has been refreshed.

[HttpHead]
public async Task<ActionResult> Head([Required][FromHeader(Name = "ETag")][MinLength(ETagBase64Length, ErrorMessage = "eTag length invalid too short")][MaxLength(ETagBase64Length, ErrorMessage = "eTag length {0} invalid too long")] string eTag)
{
	byte[] headerVersion = new byte[ETagBytesLength];

	if (!Convert.TryFromBase64String(eTag, headerVersion, out _))
	{
		logger.LogInformation("eTag invalid format");

		return this.BadRequest("eTag invalid format");
	}

	try
	{
		using (SqlConnection db = new SqlConnection(this.connectionString))
		{
			byte[] databaseVersion = await db.ExecuteScalarAsync<byte[]>(sql: "SELECT MAX(Version) FROM [Warehouse].[StockGroups]", commandType: CommandType.Text);

			if (headerVersion.SequenceEqual(databaseVersion))
			{
				return this.StatusCode(StatusCodes.Status304NotModified);
			}
		}
	}
	catch (SqlException ex)
	{
		logger.LogError(ex, "Retrieving StockItem list");

		return this.StatusCode(StatusCodes.Status500InternalServerError);
	}

	return this.Ok();
}

I used Fiddler to capture a HEAD response payload, a 304 Not Modified.

HTTP/1.1 304 Not Modified
Server: Microsoft-IIS/10.0
X-Powered-By: ASP.NET
Date: Sat, 26 Jun 2021 22:09:02 GMT

I then modified the database and the response changed to 200 OK indicating the local cache should be updated with a GET.

HTTP/1.1 200 OK
Transfer-Encoding: chunked
Server: Microsoft-IIS/10.0
X-Powered-By: ASP.NET
Date: Sat, 26 Jun 2021 22:09:59 GMT

This approach combined with the use of the If-Match, If-Modified-Since, If-None-Match and If-Unmodified-Since headers allows web and client side caches to use previously requested results when there have been no changes. This can significantly reduce the amount of network traffic and server requests.
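A sketch of how a client might drive the HEAD/GET pair (the route is illustrative, the ETag request header matches the [FromHeader] binding above):

using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public class StockGroupsClient
{
	private readonly HttpClient client;
	private string cachedETag;
	private string cachedPayload;

	public StockGroupsClient(HttpClient client)
	{
		this.client = client;
	}

	public async Task<string> GetStockGroupsAsync()
	{
		// If there is a cached copy, ask the HEAD method whether it is still current.
		if (cachedETag != null)
		{
			var headRequest = new HttpRequestMessage(HttpMethod.Head, "api/StockGroups");
			headRequest.Headers.TryAddWithoutValidation("ETag", cachedETag);

			var headResponse = await client.SendAsync(headRequest);
			if (headResponse.StatusCode == HttpStatusCode.NotModified)
			{
				return cachedPayload;
			}
		}

		// Nothing cached, or the data has changed, so re-GET and keep the new ETag.
		var getResponse = await client.GetAsync("api/StockGroups");
		getResponse.EnsureSuccessStatusCode();

		if (getResponse.Headers.TryGetValues("ETag", out var eTagValues))
		{
			cachedETag = eTagValues.First();
		}
		cachedPayload = await getResponse.Content.ReadAsStringAsync();

		return cachedPayload;
	}
}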

As part of my testing I modified the eTag so it was invalid (to check the Convert.ToBase64String and Convert.TryFromBase64String error handling) and the response was much smaller than I expected.

HTTP/1.1 400 Bad Request
Content-Length: 240
Content-Type: application/problem+json; charset=utf-8
Server: Microsoft-IIS/10.0
X-Powered-By: ASP.NET
Date: Sat, 26 Jun 2021 06:28:11 GMT

This was unlike the helpful validation messages returned by the GET method of the StockItems pagination example code:

{
   "type":"https://tools.ietf.org/html/rfc7231#section-6.5.1",
   "title":"One or more validation errors occurred.",
   "status":400,
   "traceId":"00-bd68c94bf05f5c4ca8752011d6a60533-48e966211dec4847-00",
   "errors": 
   {
      "PageSize":["PageSize must be present and greater than 0"],
      "PageNumber":["PageNumber must be present and greater than 0"]
   }
}

The lack of diagnostic information was not helpful and I’ll explore this further in a future post. I often work on Fintech applications which are “insert only”, or where nothing is deleted, just marked as inactive/read-only, so this approach is viable.

Azure Functions with VB.Net 4.X

As part of my “day job” I spend a lot of time working with C# and VB.Net 4.X “legacy” projects doing upgrades, bug fixes and moving applications to Azure. For the last couple of months I have been working on a project replacing Microsoft Message Queue(MSMQ) queues with Azure Storage Queues so the solution is easier to deploy in Azure.

The next phase of the project is to replace a number of Windows Services with Azure Queue Trigger and Timer Trigger functions. The aim is a series of small steps which we can test before deployment rather than major changes, hence the use of V1 Azure functions for the first release.

Silver Fox systems sells a Visual Studio extension which generates an HTTP Trigger VB.Net project. I needed Timer and Queue Trigger functions so I created C# examples and then used them to figure out how to build the VB.Net equivalents.

Visual Studio Solution Explorer

After quite a few failed attempts I found this sequence worked for me:

Add a new VB.Net class library
Provide a name for new class library
Select target framework

Even though the target framework on offer (.NET 5.0) is not the .NET 4.8 we need, ignore this and continue; the project file is fixed up below.

Microsoft.NET.Sdk.Functions

Add the Microsoft.NET.Sdk.Functions NuGet package (make sure it is version 1.0.38).

Visual Studio project with Azure Function Icon.

Then unload the project and open the project file.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <RootNamespace>TimerClass</RootNamespace>
    <TargetFramework>net5.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.38" />
  </ItemGroup>

</Project>

Change the TargetFramework to net48 and add the AzureFunctionsVersion line.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <RootNamespace>TimerClass</RootNamespace>
    <TargetFramework>net48</TargetFramework>
    <AzureFunctionsVersion>v1</AzureFunctionsVersion>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.38" />
  </ItemGroup>

</Project>

At this point the project should compile but won’t do much, so update the class to look like the code below.

Imports System.Threading

Imports Microsoft.Azure.WebJobs
Imports Microsoft.Extensions.Logging


Public Class TimerTrigger
   Shared executionCount As Int32

   <FunctionName("Timer")>
   Public Shared Sub Run(<TimerTrigger("0 */1 * * * *")> myTimer As TimerInfo, log As ILogger)
      Interlocked.Increment(executionCount)

      log.LogInformation("VB.Net TimerTrigger next trigger:{0} Execution count:{1}", myTimer.ScheduleStatus.Next, executionCount)

   End Sub
End Class

Then add an empty host.json file (make sure “Copy if newer” is configured in its properties) to the project directory, then depending on the deployment model configure the AzureWebJobsStorage and AzureWebJobsDashboard connection strings via environment variables or a local.settings.json file.

Visual Studio Environment variables for AzureWebJobsStorage and AzureWebJobsDashboard connection strings

Blob Trigger Sample code

Imports System.IO
Imports System.Threading

Imports Microsoft.Azure.WebJobs
Imports Microsoft.Extensions.Logging


Public Class BlobTrigger
   Shared executionCount As Int32

   ' This function will get triggered/executed when a new blob is written to the notifications container.
   <FunctionName("Notifications")>
   Public Shared Async Sub Run(<BlobTrigger("notifications/{name}", Connection:="BlobEndPoint")> payload As Stream, name As String, log As ILogger)
      Interlocked.Increment(executionCount)

      log.LogInformation("VB.Net BlobTrigger processed blob name:{0} Size:{1} bytes Execution count:{2}", name, payload.Length, executionCount)
   End Sub
End Class

HTTP Trigger Sample code

Imports System.Net
Imports System.Net.Http
Imports System.Threading

Imports Microsoft.Azure.WebJobs
Imports Microsoft.Azure.WebJobs.Extensions.Http
Imports Microsoft.Extensions.Logging


Public Class HttpTrigger
   Shared executionCount As Int32

   <FunctionName("Notifications")>
   Public Shared Async Function Run(<HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route:=Nothing)> req As HttpRequestMessage, log As ILogger) As Task(Of HttpResponseMessage)
      Interlocked.Increment(executionCount)

      log.LogInformation($"VB.Net HTTP trigger Execution count:{0} Method:{1}", executionCount, req.Method)

      Return New HttpResponseMessage(HttpStatusCode.OK)
   End Function
End Class

Queue Trigger Sample Code

Imports System.Threading

Imports Microsoft.Azure.WebJobs
Imports Microsoft.Extensions.Logging


Public Class QueueTrigger
   Shared ConcurrencyCount As Long
   Shared ExecutionCount As Long

   <FunctionName("Alerts")>
   Public Shared Sub ProcessQueueMessage(<QueueTrigger("notifications", Connection:="QueueEndpoint")> message As String, log As ILogger)
      Interlocked.Increment(ConcurrencyCount)
      Interlocked.Increment(ExecutionCount)

      log.LogInformation("VB.Net Concurrency:{0} Message:{1} Execution count:{2}", ConcurrencyCount, message, ExecutionCount)

      ' Wait for a bit to force some concurrency
      Thread.Sleep(5000)

      Interlocked.Decrement(ConcurrencyCount)
   End Sub
End Class

As well as counting the number of executions I also wanted to check that more than one instance was started to process messages when the queues had many messages. I added a “queues” section to the host.json file so I could tinker with the options.

{
  "queues": {
    "maxPollingInterval": 100,
    "visibilityTimeout": "00:00:05",
    "batchSize": 16,
    "maxDequeueCount": 5,
    "newBatchThreshold": 8
  }
}

The QueueMessageGenerator application inserts many messages into a queue for processing.
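A sketch in C# of roughly what the generator does, assuming the Azure.Storage.Queues package, with the connection string and message count as placeholders (depending on the Functions runtime version the message text may need to be Base64 encoded):

using System;
using System.Threading.Tasks;

using Azure.Storage.Queues;

class QueueMessageGenerator
{
	static async Task Main()
	{
		QueueClient queueClient = new QueueClient("UseDevelopmentStorage=true", "notifications");

		await queueClient.CreateIfNotExistsAsync();

		// Write enough messages to make the QueueTrigger function scale out.
		for (int i = 0; i < 1000; i++)
		{
			await queueClient.SendMessageAsync($"Message {i} sent at {DateTime.UtcNow:u}");
		}
	}
}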

When I started the QueueTrigger function I could see the concurrency count was > 0

Timer Trigger Sample Code

Imports System.Threading

Imports Microsoft.Azure.WebJobs
Imports Microsoft.Extensions.Logging


Public Class TimerTrigger
   Shared executionCount As Int32

   <FunctionName("Timer")>
   Public Shared Sub Run(<TimerTrigger("0 */1 * * * *")> myTimer As TimerInfo, log As ILogger)
      Interlocked.Increment(executionCount)

      log.LogInformation("VB.Net TimerTrigger next trigger:{0} Execution count:{1}", myTimer.ScheduleStatus.Next, executionCount)

   End Sub
End Class

The source code for the C# and VB.Net functions is available on GitHub

The Things Network HTTP Azure IoT Hub Integration

This post provides an overview of the required Azure Device Provisioning Service(DPS) and Azure IoT Hub configuration to process The Things Network(TTN) HTTP integration uplink messages. I have assumed that the reader is already familiar with all these products. There is an overview of configuring TTN HTTP integration in my “Simplicating and securing the HTTP handler” post.

The first step is to configure a DPS Enrollment Group

DPS Group Enrollment blade

The scopeID and the primary/secondary key need to be configured in the appsettings.json file of the uplink message processing Azure QueueTrigger function.

For more complex deployments the ApplicationEnrollmentGroupMapping configuration enables The Things Network(TTN) devices to be provisioned using different GroupEnrollment keys based on the applicationid in the first uplink message which initiates provisioning.

"DeviceProvisioningService": {
      "GlobalDeviceEndpoint": "global.azure-devices-provisioning.net",
      "ScopeID": "",
      "EnrollmentGroupSymmetricKeyDefault": "TopSecretKey",
      "DeviceProvisioningPollingDelay": 500,
      "ApplicationEnrollmentGroupMapping": {
         "Application1": "TopSecretKey1",
         "Application2": "TopSecretKey2"
      }
   }

DPS Group Enrolment with no provisioned devices

Then as uplink messages from the TTN integration are processed, devices are “automagically” created in the DPS.

Simultaneously devices are created in the Azure IoT Hub
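Behind the scenes the uplink processor derives a device key from the group enrollment key and registers the device with the DPS; a sketch of that call, assuming the Microsoft.Azure.Devices.Provisioning.Client and AMQP transport packages, with the scope ID, group key and registration ID as placeholders:

using System;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;

using Microsoft.Azure.Devices.Provisioning.Client;
using Microsoft.Azure.Devices.Provisioning.Client.Transport;
using Microsoft.Azure.Devices.Shared;

class DeviceProvisioner
{
	static async Task Main()
	{
		const string GlobalDeviceEndpoint = "global.azure-devices-provisioning.net";
		const string ScopeId = "0ne0012345A";               // placeholder
		const string EnrollmentGroupKey = "TopSecretKey";   // placeholder
		string registrationId = "seeduinolorawan1";         // placeholder, the TTN device id

		// Derive the per-device key from the group enrollment symmetric key.
		string deviceKey;
		using (var hmac = new HMACSHA256(Convert.FromBase64String(EnrollmentGroupKey)))
		{
			deviceKey = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(registrationId)));
		}

		using (var securityProvider = new SecurityProviderSymmetricKey(registrationId, deviceKey, null))
		using (var transport = new ProvisioningTransportHandlerAmqp())
		{
			ProvisioningDeviceClient provisioningClient = ProvisioningDeviceClient.Create(GlobalDeviceEndpoint, ScopeId, securityProvider, transport);

			DeviceRegistrationResult result = await provisioningClient.RegisterAsync();

			// The assigned hub and device id are then used to create a DeviceClient for sending telemetry.
			Console.WriteLine($"Hub:{result.AssignedHub} DeviceID:{result.DeviceId} Status:{result.Status}");
		}
	}
}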

Then shortly afterwards telemetry events are available for applications to process, or to inspect with tools like Azure IoT Explorer.

In the telemetry event payload sent to the Azure IoT Hub there are some extra fields to help with debugging and tracing. The raw payload is also included so messages not decoded by TTN can be processed by the client application(s).

// Assemble the JSON payload to send to Azure IoT Hub/Central.
log.LogInformation($"{messagePrefix} Payload assembly start");
JObject telemetryEvent = new JObject();
try
{
   JObject payloadFields = (JObject)payloadObect.payload_fields;
   telemetryEvent.Add("HardwareSerial", payloadObect.hardware_serial);
   telemetryEvent.Add("Retry", payloadObect.is_retry);
   telemetryEvent.Add("Counter", payloadObect.counter);
   telemetryEvent.Add("DeviceID", payloadObect.dev_id);
   telemetryEvent.Add("ApplicationID", payloadObect.app_id);
   telemetryEvent.Add("Port", payloadObect.port);
   telemetryEvent.Add("PayloadRaw", payloadObect.payload_raw);
   telemetryEvent.Add("ReceivedAtUTC", payloadObect.metadata.time);
 
   // If the payload has been unpacked in TTN backend add fields to telemetry event payload
   if (payloadFields != null)
   {
      foreach (JProperty child in payloadFields.Children())
      {
         EnumerateChildren(telemetryEvent, child);
      }
   }
}
catch (Exception ex)
{
   log.LogError(ex, $"{messagePrefix} Payload processing or Telemetry event assembly failed");
   throw;
}

Beware, the Azure Storage Account and storage queue names have a limited character set. This caused me problems several times when I used camel cased queue names etc.
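A small helper that checks names before queues are created can catch this early; a sketch of the naming rules as I understand them (3-63 characters, lower case letters, numbers and single hyphens, starting and ending with a letter or number):

using System.Text.RegularExpressions;

public static class StorageQueueName
{
	// Lower case alphanumerics and single hyphens, 3 to 63 characters,
	// no leading, trailing or consecutive hyphens.
	private static readonly Regex ValidName = new Regex(@"^[a-z0-9](?!.*--)[a-z0-9-]{1,61}[a-z0-9]$");

	// e.g. IsValid("UplinkMessages") is false, IsValid("uplink-messages") is true.
	public static bool IsValid(string queueName) => queueName != null && ValidName.IsMatch(queueName);
}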