YoloV8 – One of these NuGets is not like the others

A couple of days after my initial testing, the YoloV8 by dme-compunet NuGet was updated, so I reran my test harnesses.

Then the YoloDotNet by NickSwardh NuGet was also updated, so I reran all my test harnesses again.

Even though the YoloV8 by sstainba NuGet hadn’t been updated, I ran the test harness just in case.

The dme-compunet YoloV8 and NickSwardh YoloDotNet NuGet results are now the same (bar the 30% cutoff) and the sstainba YoloV8 results have not changed.

YoloV8 – All of these NuGets are not like the others

As part of investigating which YoloV8 NuGet to use, I built three trial applications using the dme-compunet YoloV8, sstainba Yolov8.Net, and NickSwardh YoloDotNet NuGets.

All of the implementations load the model, load the sample image, detect objects in the image, then mark up the image with the classification, minimum bounding box, and confidence of each object.

Input Image

The first implementation uses YoloV8 by dme-compunet, which supports asynchronous operation. The image is loaded asynchronously, the prediction is asynchronous, then the image is marked up and saved asynchronously.

using (var predictor = new Compunet.YoloV8.YoloV8(_applicationSettings.ModelPath))
{
   Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model load done");
   Console.WriteLine();

   using (var image = await SixLabors.ImageSharp.Image.LoadAsync<Rgba32>(_applicationSettings.ImageInputPath))
   {
      Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model detect start");

      var predictions = await predictor.DetectAsync(image);

      Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model detect done");
      Console.WriteLine();

      Console.WriteLine($" Speed: {predictions.Speed}");

      foreach (var prediction in predictions.Boxes)
      {
         Console.WriteLine($"  Class {prediction.Class} {(prediction.Confidence * 100.0):f1}% X:{prediction.Bounds.X} Y:{prediction.Bounds.Y} Width:{prediction.Bounds.Width} Height:{prediction.Bounds.Height}");
      }
      Console.WriteLine();

      Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} Plot and save : {_applicationSettings.ImageOutputPath}");

      SixLabors.ImageSharp.Image imageOutput = await predictions.PlotImageAsync(image);

      await imageOutput.SaveAsJpegAsync(_applicationSettings.ImageOutputPath);
   }
}
dme-compunet YoloV8 test application output

The second implementation uses YoloDotNet by NickSwardh, which partially supports asynchronous operation. The image is loaded asynchronously, the prediction is synchronous, the markup is synchronous, and the image is saved asynchronously.

using (var predictor = new Yolo(_applicationSettings.ModelPath, false))
{
   Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model load done");
   Console.WriteLine();

   using (var image = await SixLabors.ImageSharp.Image.LoadAsync<Rgba32>(_applicationSettings.ImageInputPath))
   {
      Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model detect start");

      var predictions = predictor.RunObjectDetection(image);

      Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model detect done");
      Console.WriteLine();

      foreach (var prediction in predictions)
      {
         Console.WriteLine($"  Class {prediction.Label.Name} {(prediction.Confidence * 100.0):f1}% X:{prediction.BoundingBox.Left} Y:{prediction.BoundingBox.Top} Width:{prediction.BoundingBox.Width} Height:{prediction.BoundingBox.Height}");
      }
      Console.WriteLine();

      Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} Plot and save : {_applicationSettings.ImageOutputPath}");

      image.Draw(predictions);

      await image.SaveAsJpegAsync(_applicationSettings.ImageOutputPath);
   }
}
NickSwardh YoloDotNet test application output

The third implementation uses YoloV8 by sstainba, which partially supports asynchronous operation. The image is loaded asynchronously, the prediction is synchronous, the markup is synchronous, and the image is saved asynchronously.

using (var predictor = YoloV8Predictor.Create(_applicationSettings.ModelPath))
{
   Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model load done");
   Console.WriteLine();

   using (var image = await SixLabors.ImageSharp.Image.LoadAsync<Rgba32>(_applicationSettings.ImageInputPath))
   {
      Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model detect start");

      var predictions = predictor.Predict(image);

      Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model detect done");
      Console.WriteLine();

      foreach (var prediction in predictions)
      {
         Console.WriteLine($"  Class {prediction.Label.Name} {(prediction.Score * 100.0):f1}% X:{prediction.Rectangle.X} Y:{prediction.Rectangle.Y} Width:{prediction.Rectangle.Width} Height:{prediction.Rectangle.Height}");
      }

      Console.WriteLine();

      Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} Plot and save : {_applicationSettings.ImageOutputPath}");

      // This is a bit hacky and should be fixed up in a future release
      Font font = new Font(SystemFonts.Get(_applicationSettings.FontName), _applicationSettings.FontSize);
      foreach (var prediction in predictions)
      {
         var x = (int)Math.Max(prediction.Rectangle.X, 0);
         var y = (int)Math.Max(prediction.Rectangle.Y, 0);
         var width = (int)Math.Min(image.Width - x, prediction.Rectangle.Width);
         var height = (int)Math.Min(image.Height - y, prediction.Rectangle.Height);

         //Note that the output is already scaled to the original image height and width.

         // Bounding Box Text
         string text = $"{prediction.Label.Name} [{prediction.Score}]";
         var size = TextMeasurer.MeasureSize(text, new TextOptions(font));

         image.Mutate(d => d.Draw(Pens.Solid(Color.Yellow, 2), new Rectangle(x, y, width, height)));

         image.Mutate(d => d.DrawText(text, font, Color.Yellow, new Point(x, (int)(y - size.Height - 1))));
      }

      await image.SaveAsJpegAsync(_applicationSettings.ImageOutputPath);
   }
}
sstainba YoloV8 test application output

I don’t understand why the three NuGets produced different results, which is worrying.

ML.Net YoloV5 + Security Camera Revisited

This post is about “revisiting” my ML.Net YoloV5 + Camera on ARM64 Raspberry PI application, updating it to .NET 6, the latest version of the TechWings yolov5-net library (formerly from mentalstack), and the latest version of the ML.Net Open Neural Network Exchange(ONNX) libraries.

Visual Studio 2022 with updated NuGet packages

The updated TechWings yolov5-net library now uses Six Labors ImageSharp for markup rather than System.Drawing.Common. (I found System.Drawing.Common a massive Pain in the Arse (PiTA))

private static async void ImageUpdateTimerCallback(object state)
{
   DateTime requestAtUtc = DateTime.UtcNow;

   // Just in case - stop code being called while photo already in progress
   if (_cameraBusy)
   {
      return;
   }
   _cameraBusy = true;

   Console.WriteLine($"{DateTime.UtcNow:yy-MM-dd HH:mm:ss} Image processing start");

   try
   {
#if SECURITY_CAMERA
      Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss:fff} Security Camera Image download start");

      using (Stream cameraStream = await _httpClient.GetStreamAsync(_applicationSettings.CameraUrl))
      using (Stream fileStream = File.Create(_applicationSettings.ImageInputFilenameLocal))
      {
         await cameraStream.CopyToAsync(fileStream);
      }

      Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss:fff} Security Camera Image download done");
#endif

      List<YoloPrediction> predictions;

      // Process the image on local file system
      using (Image<Rgba32> image = await Image.LoadAsync<Rgba32>(_applicationSettings.ImageInputFilenameLocal))
      {
         Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss:fff} YoloV5 inferencing start");
         predictions = _scorer.Predict(image);
         Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss:fff} YoloV5 inferencing done");

#if OUTPUT_IMAGE_MARKUP
         Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss:fff} Image markup start");

         var font = new Font(new FontCollection().Add(_applicationSettings.ImageOutputMarkupFontPath), _applicationSettings.ImageOutputMarkupFontSize);

         foreach (var prediction in predictions) // iterate predictions to draw results
         {
            double score = Math.Round(prediction.Score, 2);

            var (x, y) = (prediction.Rectangle.Left - 3, prediction.Rectangle.Top - 23);

            image.Mutate(a => a.DrawPolygon(Pens.Solid(prediction.Label.Color, 1),
                  new PointF(prediction.Rectangle.Left, prediction.Rectangle.Top),
                  new PointF(prediction.Rectangle.Right, prediction.Rectangle.Top),
                  new PointF(prediction.Rectangle.Right, prediction.Rectangle.Bottom),
                  new PointF(prediction.Rectangle.Left, prediction.Rectangle.Bottom)
            ));

            image.Mutate(a => a.DrawText($"{prediction.Label.Name} ({score})",
                  font, prediction.Label.Color, new PointF(x, y)));
         }

         await image.SaveAsJpegAsync(_applicationSettings.ImageOutputFilenameLocal);

         Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss:fff} Image markup done");
#endif
      }


#if PREDICTION_CLASSES
      Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss:fff} Image classes start");
      foreach (var prediction in predictions)
      {
         Console.WriteLine($"  Name:{prediction.Label.Name} Score:{prediction.Score:f2} Valid:{prediction.Score > _applicationSettings.PredictionScoreThreshold}");
      }
      Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss:fff} Image classes done");
#endif

#if PREDICTION_CLASSES_OF_INTEREST
      IEnumerable<string> predictionsOfInterest = predictions.Where(p => p.Score > _applicationSettings.PredictionScoreThreshold).Select(c => c.Label.Name).Intersect(_applicationSettings.PredictionLabelsOfInterest, StringComparer.OrdinalIgnoreCase);

      if (predictionsOfInterest.Any())
      {
         Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss} Camera image comtains {String.Join(",", predictionsOfInterest)}");
      }

#endif
   }
   catch (Exception ex)
   {
      Console.WriteLine($"{DateTime.UtcNow:yy-MM-dd HH:mm:ss} Camera image download, upload or post procesing failed {ex.Message}");
   }
   finally
   {
      _cameraBusy = false;
   }

   TimeSpan duration = DateTime.UtcNow - requestAtUtc;

   Console.WriteLine($"{DateTime.UtcNow:yy-MM-dd HH:mm:ss} Image processing done {duration.TotalSeconds:f2} sec");
   Console.WriteLine();
}

The names of the input image, output image and YoloV5 model file are configured in the appsettings.json (on device) or secrets.json (Visual Studio 2022 desktop) file. The location (ImageOutputMarkupFontPath) and size (ImageOutputMarkupFontSize) of the font used are configurable to make it easier to run the application on different devices and operating systems.

{
   "ApplicationSettings": {
      "ImageTimerDue": "0.00:00:15",
      "ImageTimerPeriod": "0.00:00:30",

      "CameraUrl": "HTTP://10.0.0.56:85/images/snapshot.jpg",
      "CameraUserName": "",
      "CameraUserPassword": "",

      "ImageInputFilenameLocal": "InputLatest.jpg",
      "ImageOutputFilenameLocal": "OutputLatest.jpg",

      "ImageOutputMarkupFontPath": "C:/Windows/Fonts/consola.ttf",
      "ImageOutputMarkupFontSize": 16,

      "YoloV5ModelPath": "YoloV5/yolov5s.onnx",

      "PredictionScoreThreshold": 0.5,

      "PredictionLabelsOfInterest": [
         "bicycle",
         "person",
         "bench"
      ]
   }
}
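
To give a flavour of how these settings are consumed, this is a minimal sketch of binding the configuration to a strongly typed settings class; the ApplicationSettings class shown here is an illustrative subset of the one in the repository, and the NuGet packages named in the comment are assumptions.

public class ApplicationSettings
{
   public TimeSpan ImageTimerDue { get; set; }
   public TimeSpan ImageTimerPeriod { get; set; }

   public string CameraUrl { get; set; }

   public string ImageInputFilenameLocal { get; set; }
   public string ImageOutputFilenameLocal { get; set; }

   public string ImageOutputMarkupFontPath { get; set; }
   public int ImageOutputMarkupFontSize { get; set; }

   public string YoloV5ModelPath { get; set; }

   public double PredictionScoreThreshold { get; set; }
   public string[] PredictionLabelsOfInterest { get; set; }
}

// Requires the Microsoft.Extensions.Configuration.Json, .UserSecrets and .Binder NuGets
IConfiguration configuration = new ConfigurationBuilder()
   .AddJsonFile("appsettings.json", optional: true)
   .AddUserSecrets<Program>()
   .Build();

_applicationSettings = configuration.GetSection("ApplicationSettings").Get<ApplicationSettings>();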

The test-rig consisted of a Unv ADZK-10 Security Camera, Power over Ethernet(PoE) module and my development desktop PC.

My bicycle and “mother in laws” car in backyard
YoloV5ObjectDetectionCamera running on my desktop PC

Once the YoloV5s model was loaded, inferencing was taking roughly 0.47 seconds.

Marked up image of my bicycle and “mother in laws” car in backyard

Summary

Again, I was “standing on the shoulders of giants”: the TechWings code just worked. With a pretrained YoloV5 model and the ML.Net Open Neural Network Exchange(ONNX) plumbing, it took a couple of hours to update the application. Most of this time was learning about the Six Labors ImageSharp library to mark up the images.

Smartish Edge Camera – Azure IoT Updateable Properties (not persisted)

This post builds on my Smartish Edge Camera – Azure IoT Direct Methods post, adding two updateable properties, the due and period values, for the image capture and processing timer. The two properties can be updated together or independently, but the values are not persisted.

When I was searching for answers I found this code in many posts and articles, but it didn’t really cover my scenario.

private static async Task OnDesiredPropertyChanged(TwinCollection desiredProperties, 
  object userContext)
{
   Console.WriteLine("desired property chPleange:");
   Console.WriteLine(JsonConvert.SerializeObject(desiredProperties));
   Console.WriteLine("Sending current time as reported property");
   TwinCollection reportedProperties = new TwinCollection
   {
       ["DateTimeLastDesiredPropertyChangeReceived"] = DateTime.Now
   };

   await Client.UpdateReportedPropertiesAsync(reportedProperties).ConfigureAwait(false);
}

When AZURE_DEVICE_PROPERTIES is defined in the SmartEdgeCameraAzureIoTService project properties, the device reports a number of properties on startup and SetDesiredPropertyUpdateCallbackAsync is used to configure the method called whenever the client receives a state update (desired or reported) from the Azure IoT Hub.

protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
	_logger.LogInformation("Azure IoT Smart Edge Camera Service starting");

	try
	{
#if AZURE_IOT_HUB_CONNECTION
		_deviceClient = await AzureIoTHubConnection();
#endif
#if AZURE_IOT_HUB_DPS_CONNECTION
		_deviceClient = await AzureIoTHubDpsConnection();
#endif

#if AZURE_DEVICE_PROPERTIES
		_logger.LogTrace("ReportedPropeties upload start");

		TwinCollection reportedProperties = new TwinCollection();

		reportedProperties["OSVersion"] = Environment.OSVersion.VersionString;
		reportedProperties["MachineName"] = Environment.MachineName;
		reportedProperties["ApplicationVersion"] = Assembly.GetAssembly(typeof(Program)).GetName().Version;
		reportedProperties["ImageTimerDue"] = _applicationSettings.ImageTimerDue;
		reportedProperties["ImageTimerPeriod"] = _applicationSettings.ImageTimerPeriod;
		reportedProperties["YoloV5ModelPath"] = _applicationSettings.YoloV5ModelPath;

		reportedProperties["PredictionScoreThreshold"] = _applicationSettings.PredictionScoreThreshold;
		reportedProperties["PredictionLabelsOfInterest"] = _applicationSettings.PredictionLabelsOfInterest;
		reportedProperties["PredictionLabelsMinimum"] = _applicationSettings.PredictionLabelsMinimum;

		await _deviceClient.UpdateReportedPropertiesAsync(reportedProperties, stoppingToken);

		_logger.LogTrace("ReportedPropeties upload done");
#endif

		_logger.LogTrace("YoloV5 model setup start");
		_scorer = new YoloScorer<YoloCocoP5Model>(_applicationSettings.YoloV5ModelPath);
		_logger.LogTrace("YoloV5 model setup done");

		_ImageUpdatetimer = new Timer(ImageUpdateTimerCallback, null, _applicationSettings.ImageTimerDue, _applicationSettings.ImageTimerPeriod);

		await _deviceClient.SetMethodHandlerAsync("ImageTimerStart", ImageTimerStartHandler, null);
		await _deviceClient.SetMethodHandlerAsync("ImageTimerStop", ImageTimerStopHandler, null);
		await _deviceClient.SetMethodDefaultHandlerAsync(DefaultHandler, null);

		await _deviceClient.SetDesiredPropertyUpdateCallbackAsync(OnDesiredPropertyChangedAsync, null);

		try
		{
			await Task.Delay(Timeout.Infinite, stoppingToken);
		}
		catch (TaskCanceledException)
		{
			_logger.LogInformation("Application shutown requested");
		}
	}
	catch (Exception ex)
	{
		_logger.LogError(ex, "Application startup failure");
	}
	finally
	{
		_deviceClient?.Dispose();
	}

	_logger.LogInformation("Azure IoT Smart Edge Camera Service shutdown");
}

// Lots of other code here

private async Task OnDesiredPropertyChangedAsync(TwinCollection desiredProperties, object userContext)
{
	TwinCollection reportedProperties = new TwinCollection();

	_logger.LogInformation("OnDesiredPropertyChanged handler");

	// NB - This approach does not persist ImageTimerDue or ImageTimerPeriod, so a stop/start will return to the
	// appsettings.json configuration values. If only one parameter is specified the other defaults to the
	// appsettings.json value. If the timer settings are changed I think they won't take effect until the next time the Timer fires.

	try
	{
		// Check to see if either of ImageTimerDue or ImageTimerPeriod has changed
		if (!desiredProperties.Contains("ImageTimerDue") && !desiredProperties.Contains("ImageTimerPeriod"))
		{
			_logger.LogInformation("OnDesiredPropertyChanged neither ImageTimerDue or ImageTimerPeriod present");
			return;
		}

		TimeSpan imageTimerDue = _applicationSettings.ImageTimerDue;

		// Check that format of ImageTimerDue valid if present
		if (desiredProperties.Contains("ImageTimerDue"))
		{
			if (TimeSpan.TryParse(desiredProperties["ImageTimerDue"].Value, out imageTimerDue))
			{
				reportedProperties["ImageTimerDue"] = imageTimerDue;
			}
			else
			{
				_logger.LogInformation("OnDesiredPropertyChanged ImageTimerDue invalid");
				return;
			}
		}

		TimeSpan imageTimerPeriod = _applicationSettings.ImageTimerPeriod;

		// Check that format of ImageTimerPeriod valid if present
		if (desiredProperties.Contains("ImageTimerPeriod"))
		{
			if (TimeSpan.TryParse(desiredProperties["ImageTimerPeriod"].Value, out imageTimerPeriod))
			{
				reportedProperties["ImageTimerPeriod"] = imageTimerPeriod;
			}
			else
			{
				_logger.LogInformation("OnDesiredPropertyChanged ImageTimerPeriod invalid");
				return;
			}
		}

		_logger.LogInformation("Desired Due:{0} Period:{1}", imageTimerDue, imageTimerPeriod);

		if (!_ImageUpdatetimer.Change(imageTimerDue, imageTimerPeriod))
		{
			_logger.LogInformation("Desired Due:{0} Period:{1} failed", imageTimerDue, imageTimerPeriod);
		}

		await _deviceClient.UpdateReportedPropertiesAsync(reportedProperties);
	}
	catch (Exception ex)
	{
		_logger.LogError(ex, "OnDesiredPropertyChangedAsync handler failed");
	}
}

The TwinCollection desiredProperties is checked for the ImageTimerDue and ImageTimerPeriod properties and, if either of these is present and valid, the Timer.Change method is called.

The AzureMLNetSmartEdgeCamera supports both Azure IoT Hub and Azure IoT Central so I have included images from Azure IoT Explorer and my Azure IoT Central Templates.

SmartEdge Camera Device Twin properties in Azure IoT Explorer

When I modified, then saved the Azure IoT Hub Device Twin desired properties JavaScript Object Notation(JSON) in Azure IoT Explorer, the method configured with SetDesiredPropertyUpdateCallbackAsync was invoked on the device.
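
For reference, a desired properties patch like this (the values are illustrative) is what triggers the callback on the device.

"properties": {
   "desired": {
      "ImageTimerDue": "0.00:00:15",
      "ImageTimerPeriod": "0.00:01:00"
   }
}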

In Azure IoT Central I added two Capabilities to the device template, the timer properties ImageTimerDue and ImageTimerPeriod.

Azure IoT Central SmartEdgeCamera Device template capabilities

I added a View to the template so the two properties could be changed (I didn’t configure either as required).

Azure IoT Central SmartEdgeCamera Device Default view designer

On the “Device Properties” “Operation” tab, when I changed the ImageTimerDue and/or ImageTimerPeriod there was visual feedback that an update was in progress.

Azure IoT Central SmartEdgeCamera Device Properties update start

Then, on the device, the SmartEdgeCameraAzureIoTService method configured with SetDesiredPropertyUpdateCallbackAsync was invoked.

SmartEdge Camera Console application displaying updated properties

Once the properties have been updated on the device, the UpdateReportedPropertiesAsync method is called.

Then a message with the updated property values from the device was visible in the telemetry.

Azure IoT Central SmartEdgeCamera Device Properties update done

Then finally the “Operation Tab” displayed a visual confirmation that the value(s) had been updated.

Smartish Edge Camera – Azure IoT Direct Methods

This post builds on my Smartish Edge Camera – Azure IoT Image-Upload post, adding two Direct Methods for starting and stopping the image capture and processing timer. The AzureMLNetSmartEdgeCamera supports both Azure IoT Hub and Azure IoT Central connectivity.

Azure IoT Explorer invoking a Direct Method

BEWARE – The Direct Method names are case sensitive, which regularly trips me up when I use Azure IoT Explorer. If the Direct Method name is unknown a default handler is called, the issue logged, and a Hyper Text Transfer Protocol(HTTP) Not Found(404) error returned.

protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
	_logger.LogInformation("Azure IoT Smart Edge Camera Service starting");

	try
	{
#if AZURE_IOT_HUB_CONNECTION
		_deviceClient = await AzureIoTHubConnection();
#endif
#if AZURE_IOT_HUB_DPS_CONNECTION
		_deviceClient = await AzureIoTHubDpsConnection();
#endif

...
		_logger.LogTrace("YoloV5 model setup start");
		_scorer = new YoloScorer<YoloCocoP5Model>(_applicationSettings.YoloV5ModelPath);
		_logger.LogTrace("YoloV5 model setup done");

		_ImageUpdatetimer = new Timer(ImageUpdateTimerCallback, null, _applicationSettings.ImageTimerDue, _applicationSettings.ImageTimerPeriod);

		await _deviceClient.SetMethodHandlerAsync("ImageTimerStart", ImageTimerStartHandler, null);
		await _deviceClient.SetMethodHandlerAsync("ImageTimerStop", ImageTimerStopHandler, null);
		await _deviceClient.SetMethodDefaultHandlerAsync(DefaultHandler, null);
...
		try
		{
			await Task.Delay(Timeout.Infinite, stoppingToken);
		}
		catch (TaskCanceledException)
		{
			_logger.LogInformation("Application shutown requested");
		}
	}
	catch (Exception ex)
	{
		_logger.LogError(ex, "Application startup failure");
	}
	finally
	{
		_deviceClient?.Dispose();
	}

	_logger.LogInformation("Azure IoT Smart Edge Camera Service shutdown");
}

private async Task<MethodResponse> ImageTimerStartHandler(MethodRequest methodRequest, object userContext)
{
	_logger.LogInformation("ImageUpdatetimer Start Due:{0} Period:{1}", _applicationSettings.ImageTimerDue, _applicationSettings.ImageTimerPeriod);

	_ImageUpdatetimer.Change(_applicationSettings.ImageTimerDue, _applicationSettings.ImageTimerPeriod);

	return new MethodResponse((short)HttpStatusCode.OK);
}

private async Task<MethodResponse> ImageTimerStopHandler(MethodRequest methodRequest, object userContext)
{
	_logger.LogInformation("ImageUpdatetimer Stop");

	_ImageUpdatetimer.Change(Timeout.Infinite, Timeout.Infinite);

	return new MethodResponse((short)HttpStatusCode.OK);
}

private async Task<MethodResponse> DefaultHandler(MethodRequest methodRequest, object userContext)
{
	_logger.LogInformation("Direct Method default handler Name:{0}", methodRequest.Name);

	return new MethodResponse((short)HttpStatusCode.NotFound);
}
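
A Direct Method can also be invoked from the service side. This is a minimal sketch using the Microsoft.Azure.Devices service SDK; the connection string and device id are placeholders.

using (ServiceClient serviceClient = ServiceClient.CreateFromConnectionString("<IoT Hub service connection string>"))
{
	// The method name must match the SetMethodHandlerAsync registration exactly (case sensitive)
	CloudToDeviceMethod methodInvocation = new CloudToDeviceMethod("ImageTimerStart");

	CloudToDeviceMethodResult result = await serviceClient.InvokeDeviceMethodAsync("<device id>", methodInvocation);

	Console.WriteLine($"Status: {result.Status}");
}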

I created an Azure IoT Central Template with two command capabilities. (For more detail see my post TTI V3 Connector Azure IoT Central Cloud to Device(C2D)).

Azure IoT Central Template Direct Method configuration
Azure IoT Central Template Direct Method invocation
Azure Smart Edge Camera console application Start Direct Method call

Initially, I had one long post which covered Direct Methods, Readonly Properties and Updateable Properties but it got too long so I split it into three.

Smartish Edge Camera – Azure IoT Image Upload

This post builds on my Smartish Edge Camera – Azure Storage Service, Azure IoT Hub, and Azure IoT Central projects adding optional camera and marked-up image upload to Azure Blob Storage for Azure IoT Hubs and Azure IoT Central.

Azure IoT Hub – File upload storage account configuration
Azure IoT Central – File upload storage account configuration

The “new improved” process of uploading files to an Azure IoT Hub or Azure IoT Central is surprisingly complex to use and make robust (I think the initial approach with DeviceClient.UploadToBlobAsync, which is now “deprecated”, was easier to use).

public async Task UploadImage(List<YoloPrediction> predictions, string filepath, string blobpath)
{
	var fileUploadSasUriRequest = new FileUploadSasUriRequest()
	{
		BlobName = blobpath 
	};

	FileUploadSasUriResponse sasUri = await _deviceClient.GetFileUploadSasUriAsync(fileUploadSasUriRequest);

	var blockBlobClient = new BlockBlobClient(sasUri.GetBlobUri());

	var fileUploadCompletionNotification = new FileUploadCompletionNotification()
	{
		// Mandatory. Must be the same value as the correlation id returned in the sas uri response
		CorrelationId = sasUri.CorrelationId,

		IsSuccess = true
	};

	try
	{
		using (FileStream fileStream = File.OpenRead(filepath))
		{
			Response<BlobContentInfo> response = await blockBlobClient.UploadAsync(fileStream);

			fileUploadCompletionNotification.StatusCode = response.GetRawResponse().Status;

			if (fileUploadCompletionNotification.StatusCode != ((int)HttpStatusCode.Created))
			{
				fileUploadCompletionNotification.IsSuccess = false;

				fileUploadCompletionNotification.StatusDescription = response.GetRawResponse().ReasonPhrase;
			}
		}
	}
	catch (RequestFailedException ex)
	{
		fileUploadCompletionNotification.StatusCode = ex.Status;

		fileUploadCompletionNotification.IsSuccess = false;

		fileUploadCompletionNotification.StatusDescription = ex.Message;
	}
	finally
	{
		await _deviceClient.CompleteFileUploadAsync(fileUploadCompletionNotification);
	}
}
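
A hypothetical caller looks something like this, assuming the _applicationSettings and _azureStorageSettings fields are bound from the configuration shown below.

string blobpath = string.Format(_azureStorageSettings.ImageMarkedUpFilenameFormat, DateTime.UtcNow);

await UploadImage(predictions, _applicationSettings.ImageMarkedUpFilepath, blobpath);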

If there is an object with a label in the PredictionLabelsOfInterest list, the camera and/or marked-up images can be uploaded (configured with ImageCameraUpload & ImageMarkedupUpload) to an Azure Storage Blob container associated with an Azure IoT Hub/Azure IoT Central instance.

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },

  "Application": {
    "DeviceID": "",
    "ImageTimerDue": "0.00:00:15",
    "ImageTimerPeriod": "0.00:00:30",

    "ImageCameraFilepath": "ImageCamera.jpg",
    "ImageMarkedUpFilepath": "ImageMarkedup.jpg",

    "ImageCameraUpload": false,
    "ImageMarkedupUpload": true,

    "ImageUploadFilepath": "ImageMarkedup.jpg",

    "YoloV5ModelPath": "YoloV5/yolov5s.onnx",

    "PredictionScoreThreshold": 0.7,
    "PredictionLabelsOfInterest": [
      "bicycle",
      "person"
    ],

    "PredictionLabelsMinimum": [
      "bicycle",
      "car",
      "person"
    ],

    "ImageCameraFilenameFormat": "{0:yyyyMMdd}/{0:HHmmss}.jpg"
  },

  "SecurityCamera": {
    "CameraUrl": "",
    "CameraUserName": "",
    "CameraUserPassword": ""
  },

  "RaspberryPICamera": {
    "ProcessWaitForExit": 1000,
    "Rotation": 180
  },

  "AzureIoTHub": {
    "ConnectionString": ""
  },

  "AzureIoTHubDPS": {
    "GlobalDeviceEndpoint": "global.azure-devices-provisioning.net",
    "IDScope": "",
    "GroupEnrollmentKey": ""
  },

  "AzureStorage": {
    "ImageCameraFilenameFormat": "{0:yyyyMMdd}/camera/{0:HHmmss}.jpg",
    "ImageMarkedUpFilenameFormat": "{0:yyyyMMdd}/markedup/{0:HHmmss}.jpg"
  }
}

The Blob’s path is prefixed with the device id (my Azure Storage Service created an Azure Blob Storage container for each device).

Azure IoT Central SmartEdge Camera devices

The format of the Azure Storage Blob path is configurable (ImageCameraFilenameFormat & ImageMarkedUpFilenameFormat + Coordinated Universal Time(UTC)) so images can be grouped.
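
For example (the timestamp is illustrative), the two format strings from the configuration above expand like this.

DateTime requestAtUtc = new DateTime(2022, 4, 12, 10, 19, 4, DateTimeKind.Utc);

string cameraPath = string.Format("{0:yyyyMMdd}/camera/{0:HHmmss}.jpg", requestAtUtc);     // 20220412/camera/101904.jpg
string markedUpPath = string.Format("{0:yyyyMMdd}/markedup/{0:HHmmss}.jpg", requestAtUtc); // 20220412/markedup/101904.jpg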

Configurable Blob paths in Azure Storage Explorer

After creating a new Azure IoT Hub, uploads started failing with an exception and there weren’t a lot of useful search results (April 2022). I found this error was caused by missing or incorrect Azure Storage Account configuration.

Azure IoT Hub Upload application failure logging
{"Message":"{\"errorCode\":400022,\"trackingId\":\"1175af36ec884cc4a54978f77b877a01-G:0-TimeStamp:04/12/2022 10:19:04\",\"message\":\"BadRequest\",\"timestampUtc\":\"2022-04-12T10:19:04.5925999Z\"}","ExceptionMessage":""}

   at Microsoft.Azure.Devices.Client.Transport.HttpClientHelper.<ExecuteAsync>d__23.MoveNext()
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.Azure.Devices.Client.Transport.HttpClientHelper.<PostAsync>d__19`2.MoveNext()
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
   at Microsoft.Azure.Devices.Client.Transport.HttpTransportHandler.<GetFileUploadSasUriAsync>d__15.MoveNext()
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
   at devMobile.IoT.MachineLearning.SmartEdgeCameraAzureIoTService.Worker.<UploadImage>d__14.MoveNext() in C:\Users\BrynLewis\source\repos\AzureMLNetSmartEdgeCamera\SmartEdgeCameraAzureIoTService\Worker.cs:line 430
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.GetResult()
   at devMobile.IoT.MachineLearning.SmartEdgeCameraAzureIoTService.Worker.<ImageUpdateTimerCallback>d__10.MoveNext() in C:\Users\BrynLewis\source\repos\AzureMLNetSmartEdgeCamera\SmartEdgeCameraAzureIoTService\Worker.cs:line 268

While testing the application I noticed an “unexpected” object detected in my backyard…

Unexpected object detection diagnostic logging
Unexpected object detection results marked-up image

The mentalstack/yolov5-net library and NuGet have been incredibly useful and the MentalStack team have done a marvelous job building and supporting this project. For this project my test-rig consisted of a Unv ADZK-10 Security Camera, Power over Ethernet(PoE) module and my HP Prodesk 400G4 DM (i7-8700T).

Smartish Edge Camera – Azure IoT Central

This post builds on my Smartish Edge Camera – Azure IoT Hub post, using the Azure IoT Hub Device Provisioning Service(DPS) to connect to Azure IoT Central.

The list of object classes is in the YoloCocoP5Model.cs file in the mentalstack/yolov5-net repository.

public override List<YoloLabel> Labels { get; set; } = new List<YoloLabel>()
{
    new YoloLabel { Id = 1, Name = "person" },
    new YoloLabel { Id = 2, Name = "bicycle" },
    new YoloLabel { Id = 3, Name = "car" },
    new YoloLabel { Id = 4, Name = "motorcycle" },
    new YoloLabel { Id = 5, Name = "airplane" },
    new YoloLabel { Id = 6, Name = "bus" },
    new YoloLabel { Id = 7, Name = "train" },
    new YoloLabel { Id = 8, Name = "truck" },
    new YoloLabel { Id = 9, Name = "boat" },
    new YoloLabel { Id = 10, Name = "traffic light" },
    new YoloLabel { Id = 11, Name = "fire hydrant" },
    new YoloLabel { Id = 12, Name = "stop sign" },
    new YoloLabel { Id = 13, Name = "parking meter" },
    new YoloLabel { Id = 14, Name = "bench" },
    new YoloLabel { Id = 15, Name = "bird" },
    new YoloLabel { Id = 16, Name = "cat" },
    new YoloLabel { Id = 17, Name = "dog" },
    new YoloLabel { Id = 18, Name = "horse" },
    new YoloLabel { Id = 19, Name = "sheep" },
    new YoloLabel { Id = 20, Name = "cow" },
    new YoloLabel { Id = 21, Name = "elephant" },
    new YoloLabel { Id = 22, Name = "bear" },
    new YoloLabel { Id = 23, Name = "zebra" },
    new YoloLabel { Id = 24, Name = "giraffe" },
    new YoloLabel { Id = 25, Name = "backpack" },
    new YoloLabel { Id = 26, Name = "umbrella" },
    new YoloLabel { Id = 27, Name = "handbag" },
    new YoloLabel { Id = 28, Name = "tie" },
    new YoloLabel { Id = 29, Name = "suitcase" },
    new YoloLabel { Id = 30, Name = "frisbee" },
    new YoloLabel { Id = 31, Name = "skis" },
    new YoloLabel { Id = 32, Name = "snowboard" },
    new YoloLabel { Id = 33, Name = "sports ball" },
    new YoloLabel { Id = 34, Name = "kite" },
    new YoloLabel { Id = 35, Name = "baseball bat" },
    new YoloLabel { Id = 36, Name = "baseball glove" },
    new YoloLabel { Id = 37, Name = "skateboard" },
    new YoloLabel { Id = 38, Name = "surfboard" },
    new YoloLabel { Id = 39, Name = "tennis racket" },
    new YoloLabel { Id = 40, Name = "bottle" },
    new YoloLabel { Id = 41, Name = "wine glass" },
    new YoloLabel { Id = 42, Name = "cup" },
    new YoloLabel { Id = 43, Name = "fork" },
    new YoloLabel { Id = 44, Name = "knife" },
    new YoloLabel { Id = 45, Name = "spoon" },
    new YoloLabel { Id = 46, Name = "bowl" },
    new YoloLabel { Id = 47, Name = "banana" },
    new YoloLabel { Id = 48, Name = "apple" },
    new YoloLabel { Id = 49, Name = "sandwich" },
    new YoloLabel { Id = 50, Name = "orange" },
    new YoloLabel { Id = 51, Name = "broccoli" },
    new YoloLabel { Id = 52, Name = "carrot" },
    new YoloLabel { Id = 53, Name = "hot dog" },
    new YoloLabel { Id = 54, Name = "pizza" },
    new YoloLabel { Id = 55, Name = "donut" },
    new YoloLabel { Id = 56, Name = "cake" },
    new YoloLabel { Id = 57, Name = "chair" },
    new YoloLabel { Id = 58, Name = "couch" },
    new YoloLabel { Id = 59, Name = "potted plant" },
    new YoloLabel { Id = 60, Name = "bed" },
    new YoloLabel { Id = 61, Name = "dining table" },
    new YoloLabel { Id = 62, Name = "toilet" },
    new YoloLabel { Id = 63, Name = "tv" },
    new YoloLabel { Id = 64, Name = "laptop" },
    new YoloLabel { Id = 65, Name = "mouse" },
    new YoloLabel { Id = 66, Name = "remote" },
    new YoloLabel { Id = 67, Name = "keyboard" },
    new YoloLabel { Id = 68, Name = "cell phone" },
    new YoloLabel { Id = 69, Name = "microwave" },
    new YoloLabel { Id = 70, Name = "oven" },
    new YoloLabel { Id = 71, Name = "toaster" },
    new YoloLabel { Id = 72, Name = "sink" },
    new YoloLabel { Id = 73, Name = "refrigerator" },
    new YoloLabel { Id = 74, Name = "book" },
    new YoloLabel { Id = 75, Name = "clock" },
    new YoloLabel { Id = 76, Name = "vase" },
    new YoloLabel { Id = 77, Name = "scissors" },
    new YoloLabel { Id = 78, Name = "teddy bear" },
    new YoloLabel { Id = 79, Name = "hair drier" },
    new YoloLabel { Id = 80, Name = "toothbrush" }
};

Some of the label choices seem a bit arbitrary (frisbee, surfboard) and American (fire hydrant, baseball bat, baseball glove). It was quite tedious configuring the 80 labels in my Azure IoT Central template.

Azure IoT Central Template with all the YoloV5 labels configured

If there is an object with a label in the PredictionLabelsOfInterest list, a tally of each of the different object classes in the image is sent to an Azure IoT Hub/ Azure IoT Central.

"Application": {
  "DeviceID": "",
  "ImageTimerDue": "0.00:00:15",
  "ImageTimerPeriod": "0.00:00:30",

  "ImageCameraFilepath": "ImageCamera.jpg",
  "ImageMarkedUpFilepath": "ImageMarkedup.jpg",

  "YoloV5ModelPath": "YoloV5/yolov5s.onnx",

  "PredictionScoreThreshold": 0.7,
  "PredictionLabelsOfInterest": [
    "bicycle",
    "person"
  ],
  "PredictionLabelsMinimum": [
    "bicycle",
    "car",
    "person"
  ]
}
My backyard just after the car left (the dry patch in shingle on the right)
Smartish Edge Camera Service console just after car left
Smartish Edge Camera Azure IoT Central graphs showing missing data points

After the You Only Look Once(YOLOV5)+ML.Net+Open Neural Network Exchange(ONNX) plumbing has loaded, a timer with a configurable due time and period is started.

private async void ImageUpdateTimerCallback(object state)
{
	DateTime requestAtUtc = DateTime.UtcNow;

	// Just in case - stop code being called while photo already in progress
	if (_cameraBusy)
	{
		return;
	}
	_cameraBusy = true;

	_logger.LogInformation("Image processing start");

	try
	{
#if CAMERA_RASPBERRY_PI
		RaspberryPIImageCapture();
#endif
#if CAMERA_SECURITY
		SecurityCameraImageCapture();
#endif
		List<YoloPrediction> predictions;

		using (Image image = Image.FromFile(_applicationSettings.ImageCameraFilepath))
		{
			_logger.LogTrace("Prediction start");
			predictions = _scorer.Predict(image);
			_logger.LogTrace("Prediction done");

			OutputImageMarkup(image, predictions, _applicationSettings.ImageMarkedUpFilepath);
		}

		if (_logger.IsEnabled(LogLevel.Trace))
		{
			_logger.LogTrace("Predictions {0}", predictions.Select(p => new { p.Label.Name, p.Score }));
		}

		var predictionsValid = predictions.Where(p => p.Score >= _applicationSettings.PredictionScoreThreshold).Select(p => p.Label.Name);

		// Count up the number of each class detected in the image
		var predictionsTally = predictionsValid.GroupBy(p => p)
				.Select(p => new
				{
					Label = p.Key,
					Count = p.Count()
				});

		if (_logger.IsEnabled(LogLevel.Information))
		{
			_logger.LogInformation("Predictions tally before {0}", predictionsTally.ToList());
		}

		// Add in any missing counts the cloudy side is expecting
		if (_applicationSettings.PredictionLabelsMinimum != null)
		{
			foreach( String label in _applicationSettings.PredictionLabelsMinimum)
			{
				if (!predictionsTally.Any(c=>c.Label == label ))
				{
					predictionsTally = predictionsTally.Append(new {Label = label, Count = 0 });
				}
			}
		}

		if (_logger.IsEnabled(LogLevel.Information))
		{
			_logger.LogInformation("Predictions tally after {0}", predictionsTally.ToList());
		}

		if ((_applicationSettings.PredictionLabelsOfInterest == null) || (predictionsValid.Select(c => c).Intersect(_applicationSettings.PredictionLabelsOfInterest, StringComparer.OrdinalIgnoreCase).Any()))
		{
			JObject telemetryDataPoint = new JObject();

			foreach (var predictionTally in predictionsTally)
			{
				telemetryDataPoint.Add(predictionTally.Label, predictionTally.Count);
			}

			using (Message message = new Message(Encoding.ASCII.GetBytes(JsonConvert.SerializeObject(telemetryDataPoint))))
			{
				message.Properties.Add("iothub-creation-time-utc", requestAtUtc.ToString("s", CultureInfo.InvariantCulture));

				await _deviceClient.SendEventAsync(message);
			}
		}
	}
	catch (Exception ex)
	{
		_logger.LogError(ex, "Camera image download, post processing, or telemetry failed");
	}
	finally
	{
		_cameraBusy = false;
	}

	TimeSpan duration = DateTime.UtcNow - requestAtUtc;

	_logger.LogInformation("Image processing done {0:f2} sec", duration.TotalSeconds);
}

Using some Language Integrated Query (LINQ) code, any predictions with a score < PredictionScoreThreshold are discarded. A count of the instances of each class is generated with some more LINQ code.

The optional PredictionLabelsMinimum is then used to add additional labels with a count of 0 to the predictions tally so there are no missing datapoints. This is specifically for the Azure IoT Central Dashboard so the graph lines are continuous.

Smartish Edge Camera Service console just after put bike in-front of the garage

If any of the valid prediction labels is in the PredictionLabelsOfInterest list (if PredictionLabelsOfInterest is empty any label is a label of interest), the list of prediction class counts is used to populate a Newtonsoft JObject which is serialised to generate a JavaScript Object Notation(JSON) Azure IoT Hub message payload.
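
A representative message payload (the counts are illustrative) looks like this.

{
   "person": 1,
   "bicycle": 2,
   "car": 0
}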

The “automagic” graph scaling can be sub-optimal

The mentalstack/yolov5-net library and NuGet have been incredibly useful and the MentalStack team have done a marvelous job building and supporting this project.

The test-rig consisted of a Unv ADZK-10 Security Camera, Power over Ethernet(PoE) and my HP Prodesk 400G4 DM (i7-8700T).

Smartish Edge Camera – Azure IoT Hub

The SmartEdgeCameraAzureIoTService application uses the same You Only Look Once(YOLOV5) + ML.Net + Open Neural Network Exchange(ONNX) plumbing as the SmartEdgeCameraAzureStorageService.

If there is an object with a label in the PredictionLabelsOfInterest list, a tally of each of the different object classes is sent to an Azure IoT Hub.

"Application": {
  "DeviceID": "",
  "ImageTimerDue": "0.00:00:15",
  "ImageTimerPeriod": "0.00:00:30",

  "ImageCameraFilepath": "ImageCamera.jpg",

  "YoloV5ModelPath": "YoloV5/yolov5s.onnx",

  "PredicitionScoreThreshold": 0.7,
  "PredictionLabelsOfInterest": [
    "person"
  ]
}

The Azure IoT Hub can be configured via a Shared Access Signature(SAS) device policy connection string or the Azure IoT Hub Device Provisioning Service(DPS).
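
This is a minimal sketch of the two connection options, assuming settings objects bound from the AzureIoTHub and AzureIoTHubDPS configuration sections shown earlier; the device key derivation follows the standard DPS symmetric key group enrollment approach.

#if AZURE_IOT_HUB_CONNECTION
	_deviceClient = DeviceClient.CreateFromConnectionString(_azureIoTHubSettings.ConnectionString, TransportType.Amqp);
#endif
#if AZURE_IOT_HUB_DPS_CONNECTION
	// Derive the device key from the group enrollment key, then register via DPS
	using (var hmac = new HMACSHA256(Convert.FromBase64String(_azureIoTHubDpsSettings.GroupEnrollmentKey)))
	{
		string deviceKey = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(_applicationSettings.DeviceID)));

		using (var securityProvider = new SecurityProviderSymmetricKey(_applicationSettings.DeviceID, deviceKey, null))
		using (var transport = new ProvisioningTransportHandlerAmqp(TransportFallbackType.TcpOnly))
		{
			ProvisioningDeviceClient provisioningClient = ProvisioningDeviceClient.Create(_azureIoTHubDpsSettings.GlobalDeviceEndpoint, _azureIoTHubDpsSettings.IDScope, securityProvider, transport);

			DeviceRegistrationResult result = await provisioningClient.RegisterAsync();

			_deviceClient = DeviceClient.Create(result.AssignedHub, new DeviceAuthenticationWithRegistrySymmetricKey(result.DeviceId, securityProvider.GetPrimaryKey()), TransportType.Amqp);
		}
	}
#endif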

Cars and bicycles in my backyard with no object(s) of interest
SmartEdgeCameraAzureIoTService no object(s) of interest
Cars and bicycles in my backyard with one object of interest
SmartEdgeCameraAzureIoTService one object of interest
Azure IoT Explorer Telemetry with one object of interest

After the You Only Look Once(YOLOV5)+ML.Net+Open Neural Network Exchange(ONNX) plumbing has loaded, a timer with a configurable due time and period is started. Using some Language Integrated Query (LINQ) code, any predictions with a score < PredicitionScoreThreshold are discarded, then the list of predictions is checked to see if there are any in the PredictionLabelsOfInterest. If there are any matching predictions a count of the instances of each class is generated with more LINQ code.

private async void ImageUpdateTimerCallback(object state)
{
	DateTime requestAtUtc = DateTime.UtcNow;

	// Just in case - stop code being called while photo already in progress
	if (_cameraBusy)
	{
		return;
	}
	_cameraBusy = true;

	_logger.LogInformation("Image processing start");

	try
	{
#if CAMERA_RASPBERRY_PI
		RaspberryPIImageCapture();
#endif
#if CAMERA_SECURITY
		SecurityCameraImageCapture();
#endif
		List<YoloPrediction> predictions;

		using (Image image = Image.FromFile(_applicationSettings.ImageCameraFilepath))
		{
			_logger.LogTrace("Prediction start");
			predictions = _scorer.Predict(image);
			_logger.LogTrace("Prediction done");
		}

		if (_logger.IsEnabled(LogLevel.Trace))
		{
			_logger.LogTrace("Predictions {0}", predictions.Select(p => new { p.Label.Name, p.Score }));
		}

		var predictionsOfInterest = predictions.Where(p => p.Score > _applicationSettings.PredicitionScoreThreshold)
										.Select(c => c.Label.Name)
										.Intersect(_applicationSettings.PredictionLabelsOfInterest, StringComparer.OrdinalIgnoreCase);

		if (predictionsOfInterest.Any())
		{
			if (_logger.IsEnabled(LogLevel.Trace))
			{
				_logger.LogTrace("Predictions of interest {0}", predictionsOfInterest.ToList());
			}

			var predictionsTally = predictions.GroupBy(p => p.Label.Name)
									.Select(p => new
									{
										Label = p.Key,
										Count = p.Count()
									});

			if (_logger.IsEnabled(LogLevel.Information))
			{
				_logger.LogInformation("Predictions tally {0}", predictionsTally.ToList());
			}

			JObject telemetryDataPoint = new JObject();

			foreach (var predictionTally in predictionsTally)
			{
				telemetryDataPoint.Add(predictionTally.Label, predictionTally.Count);
			}

			using (Message message = new Message(Encoding.ASCII.GetBytes(JsonConvert.SerializeObject(telemetryDataPoint))))
			{
				message.Properties.Add("iothub-creation-time-utc", requestAtUtc.ToString("s", CultureInfo.InvariantCulture));

				await _deviceClient.SendEventAsync(message);
			}
		}
	}
	catch (Exception ex)
	{
		_logger.LogError(ex, "Camera image download, post processing, or telemetry failed");
	}
	finally
	{
		_cameraBusy = false;
	}

	TimeSpan duration = DateTime.UtcNow - requestAtUtc;

	_logger.LogInformation("Image processing done {0:f2} sec", duration.TotalSeconds);
}

The list of prediction class counts is used to populate a Newtonsoft JObject which is serialised to generate a JavaScript Object Notation(JSON) payload for an Azure IoT Hub message.

The test-rig consisted of a Unv ADZK-10 Security Camera, Power over Ethernet(PoE) and my HP Prodesk 400G4 DM (i7-8700T).

Smartish Edge Camera – Azure Storage Service

The AzureIoTSmartEdgeCameraService was a useful proof of concept(PoC) but the codebase was starting to get unwieldy so it has been split into the SmartEdgeCameraAzureStorageService and SmartEdgeCameraAzureIoTService.

The initial ML.Net + You Only Look Once V5(YoloV5) project uploaded raw (effectively a time-lapse camera) and marked-up (with searchable tags) images to Azure Storage. But, after using it in a “real” project I found…

  • The time-lapse functionality which continually uploaded images wasn’t that useful. I have another standalone application which has that functionality.
  • If an object with a label in the “PredictionLabelsOfInterest” and a score greater than PredicitionScoreThreshold was detected, it was useful to have the option to upload the camera and/or marked-up (including objects below the threshold) image(s).
  • Having both camera and marked-up images tagged so they were searchable with an application like Azure Storage Explorer was very useful.
Security Camera Image
Security Camera image with bounding boxes around all detected objects
Azure Storage Explorer filter for images containing 1 person

After the You Only Look Once(YOLOV5)+ML.Net+Open Neural Network Exchange(ONNX) plumbing has loaded, a timer with a configurable due time and period is started.

private async void ImageUpdateTimerCallback(object state)
{
	DateTime requestAtUtc = DateTime.UtcNow;

	// Just in case - stop code being called while photo already in progress
	if (_cameraBusy)
	{
		return;
	}
	_cameraBusy = true;

	_logger.LogInformation("Image processing start");

	try
	{
#if CAMERA_RASPBERRY_PI
		RaspberryPIImageCapture();
#endif
#if CAMERA_SECURITY
		SecurityCameraImageCapture();
#endif
		List<YoloPrediction> predictions;

		using (Image image = Image.FromFile(_applicationSettings.ImageCameraFilepath))
		{
			_logger.LogTrace("Prediction start");
			predictions = _scorer.Predict(image);
			_logger.LogTrace("Prediction done");

			OutputImageMarkup(image, predictions, _applicationSettings.ImageMarkedUpFilepath);
		}

		if (_logger.IsEnabled(LogLevel.Trace))
		{
			_logger.LogTrace("Predictions {0}", predictions.Select(p => new { p.Label.Name, p.Score }));
		}

		var predictionsOfInterest = predictions.Where(p => p.Score > _applicationSettings.PredicitionScoreThreshold).Select(c => c.Label.Name).Intersect(_applicationSettings.PredictionLabelsOfInterest, StringComparer.OrdinalIgnoreCase);
		if (_logger.IsEnabled(LogLevel.Trace))
		{
			_logger.LogTrace("Predictions of interest {0}", predictionsOfInterest.ToList());
		}

		var predictionsTally = predictions.Where(p => p.Score >= _applicationSettings.PredicitionScoreThreshold)
									.GroupBy(p => p.Label.Name)
									.Select(p => new
									{
										Label = p.Key,
										Count = p.Count()
									});

		if (predictionsOfInterest.Any())
		{
			BlobUploadOptions blobUploadOptions = new BlobUploadOptions()
			{
				Tags = new Dictionary<string, string>()
			};

			foreach (var predicition in predictionsTally)
			{
				blobUploadOptions.Tags.Add(predicition.Label, predicition.Count.ToString());
			}

			if (_applicationSettings.ImageCameraUpload)
			{
				_logger.LogTrace("Image camera upload start");

				string imageFilenameCloud = string.Format(_azureStorageSettings.ImageCameraFilenameFormat, requestAtUtc);

				await _imagecontainerClient.GetBlobClient(imageFilenameCloud).UploadAsync(_applicationSettings.ImageCameraFilepath, blobUploadOptions);

				_logger.LogTrace("Image camera upload done");
			}

			if (_applicationSettings.ImageMarkedupUpload)
			{
				_logger.LogTrace("Image marked-up upload start");

				string imageFilenameCloud = string.Format(_azureStorageSettings.ImageMarkedUpFilenameFormat, requestAtUtc);

				await _imagecontainerClient.GetBlobClient(imageFilenameCloud).UploadAsync(_applicationSettings.ImageMarkedUpFilepath, blobUploadOptions);

				_logger.LogTrace("Image marked-up upload done");
			}
		}

		if (_logger.IsEnabled(LogLevel.Information))
		{
			_logger.LogInformation("Predictions tally {0}", predictionsTally.ToList());
		}
	}
	catch (Exception ex)
	{
		_logger.LogError(ex, "Camera image download, post procesing, image upload, or telemetry failed");
	}
	finally
	{
		_cameraBusy = false;
	}

	TimeSpan duration = DateTime.UtcNow - requestAtUtc;

	_logger.LogInformation("Image processing done {0:f2} sec", duration.TotalSeconds);
}

The test-rig consisted of a Unv ADZK-10 Security Camera, Power over Ethernet(PoE) module, D-Link Switch and a Raspberry Pi 4B 8G, or ASUS PE100A, or my HP Prodesk 400G4 DM (i7-8700T).

Security Camera Image download times

Excluding the first download, it takes on average 0.16 secs to download a security camera image with my network setup.

Development PC image download and processing console

The HP Prodesk 400G4 DM (i7-8700T) took on average 1.16 seconds to download an image from the camera, run the model, and upload the two images to Azure Storage.

Raspberry PI 4B image download and processing console

The Raspberry Pi 4B 8G took on average 2.18 seconds to download an image from the camera, run the model, then upload the two images to Azure Storage.

ASUS PE100A image download and processing console

The ASUS PE100A took on average 3.79 seconds to download an image from the camera, run the model, then upload the two images to Azure Storage.

Smartish Edge Camera – Azure Storage Image Tags

This ML.Net + You Only Look Once V5(YoloV5) + Raspberry PI 4B project uploads raw camera and marked-up (with searchable tags) images to Azure Storage.

Raspberry PI 4 B backyard test rig

My backyard test-rig consists of a Unv ADZK-10 Security Camera, Power over Ethernet(PoE) module, D-Link Switch and a Raspberry Pi 4B 8G.

{
   ...

  "Application": {
    "DeviceId": "edgecamera",
...
    "PredicitionScoreThreshold": 0.7,
    "PredictionLabelsOfInterest": [
      "bicycle",
      "person",
      "car"
    ],
    "OutputImageMarkup": true
  },
...
  "AzureStorage": {
    "ConnectionString": "FhisIsNotTheConnectionStringYouAreLookingFor",
    "ImageCameraFilenameFormat": "{0:yyyyMMdd}/camera/{0:HHmmss}.jpg",
    "ImageMarkedUpFilenameFormat": "{0:yyyyMMdd}/markedup/{0:HHmmss}.jpg"
  }
}

After the You Only Look Once(YOLOV5)+ML.Net+Open Neural Network Exchange(ONNX) plumbing has loaded, a timer with a configurable due time and period is started.

private async void ImageUpdateTimerCallback(object state)
{
	DateTime requestAtUtc = DateTime.UtcNow;

	// Just in case - stop code being called while photo already in progress
	if (_cameraBusy)
	{
		return;
	}
	_cameraBusy = true;

	_logger.LogInformation("Image processing start");

	try
	{
#if CAMERA_RASPBERRY_PI
		RaspberryPIImageCapture();
#endif
#if CAMERA_SECURITY
		SecurityCameraImageCapture();
#endif
		if (_applicationSettings.ImageCameraUpload)
		{
			_logger.LogTrace("Image camera upload start");

			string imageFilenameCloud = string.Format(_azureStorageSettings.ImageCameraFilenameFormat, requestAtUtc);

			await _imagecontainerClient.GetBlobClient(imageFilenameCloud).UploadAsync(_applicationSettings.ImageCameraFilepath, true);

			_logger.LogTrace("Image camera upload done");
		}

		List<YoloPrediction> predictions;

		using (Image image = Image.FromFile(_applicationSettings.ImageCameraFilepath))
		{
			_logger.LogTrace("Prediction start");
			predictions = _scorer.Predict(image);
			_logger.LogTrace("Prediction done");

			OutputImageMarkup(image, predictions, _applicationSettings.ImageMarkedUpFilepath);
		}

		if (_logger.IsEnabled(LogLevel.Trace))
		{
			_logger.LogTrace("Predictions {0}", predictions.Select(p => new { p.Label.Name, p.Score }));
		}

		var predictionsOfInterest = predictions.Where(p => p.Score > _applicationSettings.PredicitionScoreThreshold).Select(c => c.Label.Name).Intersect(_applicationSettings.PredictionLabelsOfInterest, StringComparer.OrdinalIgnoreCase);
		if (_logger.IsEnabled(LogLevel.Trace))
		{
			_logger.LogTrace("Predictions of interest {0}", predictionsOfInterest.ToList());
		}

		var predictionsTally = predictions.Where(p => p.Score >= _applicationSettings.PredicitionScoreThreshold)
									.GroupBy(p => p.Label.Name)
									.Select(p => new
									{
										Label = p.Key,
										Count = p.Count()
									});

		if (_applicationSettings.ImageMarkedupUpload && predictionsOfInterest.Any())
		{
			_logger.LogTrace("Image marked-up upload start");

			string imageFilenameCloud = string.Format(_azureStorageSettings.ImageMarkedUpFilenameFormat, requestAtUtc);

			BlobUploadOptions blobUploadOptions = new BlobUploadOptions()
			{
				Tags = new Dictionary<string, string>()
			};

			foreach (var predicition in predictionsTally)
			{
				blobUploadOptions.Tags.Add(predicition.Label, predicition.Count.ToString());
			}

			BlobClient blobClient = _imagecontainerClient.GetBlobClient(imageFilenameCloud);

			await blobClient.UploadAsync(_applicationSettings.ImageMarkedUpFilepath, blobUploadOptions);

			_logger.LogTrace("Image marked-up upload done");
		}

		if (_logger.IsEnabled(LogLevel.Information))
		{
			_logger.LogInformation("Predictions tally {0}", predictionsTally.ToList());
		}
	}
	catch (Exception ex)
	{
		_logger.LogError(ex, "Camera image download, post procesing, image upload, or telemetry failed");
	}
	finally
	{
		_cameraBusy = false;
	}

	TimeSpan duration = DateTime.UtcNow - requestAtUtc;

	_logger.LogInformation("Image processing done {0:f2} sec", duration.TotalSeconds);
}
RaspberryPI 4B console application output

A marked-up image is uploaded to Azure Storage if any of the objects detected (with a score greater than PredicitionScoreThreshold) is in the PredictionLabelsOfInterest list.

Single bicycle
Two bicycles
Three bicycles
Three bicycles with person in the foreground
Two bicycles with a person and dog in the foreground

I have added Tags to the images so they can be filtered with tools like Azure Storage Explorer.
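
The tags can also be queried programmatically. This is a minimal sketch using the Azure.Storage.Blobs blob index tag filter; the connection string is a placeholder and the tag values compare as strings.

BlobServiceClient blobServiceClient = new BlobServiceClient("<storage connection string>");

// Find all the images tagged with two or more bicycles
await foreach (TaggedBlobItem blobItem in blobServiceClient.FindBlobsByTagsAsync("\"bicycle\" >= '2'"))
{
	Console.WriteLine($"{blobItem.BlobContainerName}/{blobItem.BlobName}");
}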

All the camera images
All the marked up images with more than one bicycle
All the marked up images with more than two bicycles
All the marked up images with people and bicycles