YoloV8 ONNX – Nvidia Jetson Orin Nano™ Execution Providers

The Seeedstudio reComputer J3011 has two processors, an ARM64 CPU and an Nvidia Jetson Orin 8G, which can be used for inferencing with the Open Neural Network Exchange (ONNX) Runtime.

Story of Fail

Inferencing worked the first time on the ARM64 CPU because the required runtime is included in the Microsoft.ML.OnnxRuntime NuGet.

ARM64 Linux ONNX runtime
Microsoft.ML.OnnxRuntime NuGet ARM64 Linux runtime

Inferencing failed on the Nvidia Jetson Orin 8G because the CUDA Execution Provider and TensorRT Execution Provider for the ONNX Runtime were not included in the Microsoft.ML.OnnxRuntime.Gpu.Linux NuGet.

Missing ARM64 Linux GPU runtime

There were Linux x64 and Windows x64 versions of the ONNX Runtime library included in the Microsoft.ML.OnnxRuntime.Gpu NuGet.
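
For context, a sketch of the sort of package reference involved (not the actual project file; the 1.18.0 version matches the ONNX Runtime version used later in this post):

<!-- The GPU meta-package - this only contained x64 native runtimes, not ARM64 -->
<PackageReference Include="Microsoft.ML.OnnxRuntime.Gpu" Version="1.18.0" />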

Microsoft.ML.OnnxRuntime.Gpu NuGet x64 Linux runtime

Desperately Seeking libonnxruntime.so

The Nvidia ONNX Runtime site had pip wheel files for the different versions of Python and the Open Neural Network Exchange (ONNX) Runtime.

The onnxruntime_gpu-1.18.0-cp312-cp312-linux_aarch64.whl matched the version of the ONNX Runtime I needed and the version of Python on the device.

When the pip wheel file was renamed onnxruntime_gpu-1.18.0-cp312-cp312-linux_aarch64.zip it could be opened, but there wasn't a libonnxruntime.so.

Onnxruntime_gpu-1.18.0-cp312-cp312-linux_aarch64 file listing

Building the TensorRT & CUDA Execution Providers

The ONNX Runtime build has to be done on the Nvidia Jetson Orin itself. After installing all the necessary prerequisites, the first attempt failed.

bryn@ubuntu:~/onnxruntime/onnxruntime$ ./build.sh --config Release --update --build --build_wheel \
--use_tensorrt --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu \
--tensorrt_home /usr/lib/aarch64-linux-gnu

When in high power mode more cores are used, but this consumes more resources when building the ONNX Runtime. To limit resource utilisation --parallel 2 was added to the command line because the compile process was having “out of memory” failures.

bryn@ubuntu:~/onnxruntime/onnxruntime$ ./build.sh --config Release --update --build --parallel 2 --build_wheel \
--use_tensorrt --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu \
--tensorrt_home /usr/lib/aarch64-linux-gnu

There were some compiler warnings but they appeared to be benign.

The first attempt at running the application failed because libonnxruntime.so was missing, so --build_shared_lib was added to the command line.

2024-06-10 18:21:58,480 build [INFO] - Build complete
bryn@ubuntu:~/onnxruntime/onnxruntime$ ./build.sh --config Release --update --build --parallel 2 --build_wheel --use_tensorrt --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu --tensorrt_home /usr/lib/aarch64-linux-gnu --build_shared_lib

When the build completed, the files were copied to the runtime folder of the program.

The application could then be configured to use the TensorRT Execution Provider.
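
A minimal sketch of what that configuration amounts to with the ONNX Runtime C# API (the model file name is a placeholder, and the locally built native libraries must be alongside the application):

using Microsoft.ML.OnnxRuntime;

// Device 0 is the integrated Orin GPU
using SessionOptions sessionOptions = SessionOptions.MakeSessionOptionWithTensorrtProvider(0);
using var session = new InferenceSession("TennisBalls.onnx", sessionOptions);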

Getting CUDA and TensorRT working on the Nvidia Jetson Orin 8G took much longer than I expected, with many dead ends and device factory resets before the process was repeatable.

YoloV8 ONNX – Nvidia Jetson Orin Nano™ CPU & GPU TensorRT Inferencing

The Seeedstudio reComputer J3011 has two processors, an ARM64 CPU and an Nvidia Jetson Orin 8G. To speed up TensorRT inferencing I built an Open Neural Network Exchange (ONNX) Runtime TensorRT Execution Provider. After updating the code to add a “warm-up” and tracking of average pre-processing, inferencing & post-processing durations, I did a series of CPU & GPU performance tests.
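
The “warm-up” is just an untimed first inference, so one-off costs (TensorRT engine load, CUDA context creation) don't skew the averages; a minimal sketch reusing names from the YoloV8.Coprocessor.Detect.Image code:

// Run one untimed inference before the measured iterations start
using (var warmUpImage = await SixLabors.ImageSharp.Image.LoadAsync<Rgba32>(_applicationSettings.ImageInputPath))
{
   await predictor.DetectAsync(warmUpImage);
}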

The testing consisted of permutations of three models, TennisBallsYoloV8s20240618640×640.onnx, TennisBallsYoloV8s2024062410241024.onnx & TennisBallsYoloV8x20240614640×640 (limited testing as it was slow), and three images, TennisBallsLandscape640x640.jpg, TennisBallsLandscape1024x1024.jpg & TennisBallsLandscape3072x4080.jpg.

Executive Summary

As expected, inferencing with a TensorRT 640×640 model and a 640×640 image was fastest: 9mSec pre-processing, 21mSec inferencing, then 4mSec post-processing, roughly 34mSec end-to-end.

If the image had to be scaled with SixLabors.ImageSharp this significantly increased the pre-processing (and overall) time.
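
The pre-scaled test images were produced along these lines (a sketch; ResizeMode.Pad, which letterboxes to exactly 640×640, is an assumption):

using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;

using (var image = Image.Load("TennisBallsLandscape3072x4080.jpg"))
{
   // Letterbox the camera image down to the model input size
   image.Mutate(x => x.Resize(new ResizeOptions { Size = new Size(640, 640), Mode = ResizeMode.Pad }));
   image.Save("TennisBallsLandscape640x640.jpg");
}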

CPU Inferencing

GPU TensorRT Small model Inferencing

GPU TensorRT Large model Inferencing

Nvidia Jetson Orin Nano™ JetPack 6

The Seeedstudio reComputer J3011 has two processors, an ARM64 CPU and an Nvidia Jetson Orin 8G coprocessor. Speeding up ML.NET running on the Nvidia Jetson Orin 8G required compatible versions of ML.NET, the Open Neural Network Exchange (ONNX) Runtime and NVIDIA JetPack.

Before installing NVIDIA JetPack 6 the Seeedstudio reComputer J3011 Edge AI Device has to be put into recovery mode.

Seeedstudio reComputer J3011 Edge AI Device with jumper for recovery mode

When started in recovery mode the Seeedstudio J3011 was in the list of Universal Serial Bus (USB) devices returned by lsusb.

Upgrading to JetPack 5.1.1 using the Windows Subsystem for Linux terminal, so the device could then be upgraded, failed. The NVIDIA SDK Manager downloads and installs all the required components and dependencies.

Installing NVIDIA JetPack 6 from the Windows Subsystem for Linux failed because the version of Ubuntu installed (Ubuntu 24.04 LTS) was not supported by the NVIDIA SDK Manager.

Installing NVIDIA JetPack 6 from a desktop PC running Ubuntu 24.04 LTS failed for the same reason: the NVIDIA SDK Manager did not support that version of Ubuntu. The desktop PC was then “re-paved” with Ubuntu 22.04 LTS and the NVIDIA SDK Manager worked.

An NVIDIA Developer program login is required to launch the NVIDIA SDK Manager.

Selecting the right target hardware is important if it is not “auto detected”.

The Open Neural Network Exchange (ONNX) Runtime supports the Compute Unified Device Architecture (CUDA), which has to be included in the installation package.

Downloading NVIDIA JetPack 6 and all the selected components of the install can be quite slow.

Installation of NVIDIA Jetpack 6 and selected components can take a while.

Even though JetPack 6 is now available for Seeed's Jetson Orin devices this process is still applicable for an upgrade or “factory reset”.

YoloV8 ONNX – Nvidia Jetson Orin Nano™ DenseTensor Performance

When running the YoloV8 Coprocessor demonstration on the Nvidia Jetson Orin, inferencing looked a bit odd: the dotted line wasn't moving as fast as expected. To investigate this further I split the overall duration into pre-processing, inferencing and post-processing times. Inferencing and post-processing were “quick”, but pre-processing was taking longer than expected.
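
The duration split was done with simple stopwatch instrumentation along these lines (a sketch, not the exact code):

var stopwatch = System.Diagnostics.Stopwatch.StartNew();

// ... image load, resize and DenseTensor conversion here ...
TimeSpan preProcessing = stopwatch.Elapsed;

stopwatch.Restart();
// ... ONNX Runtime session run here ...
TimeSpan inferencing = stopwatch.Elapsed;

stopwatch.Restart();
// ... confidence filtering and non-maximum suppression here ...
TimeSpan postProcessing = stopwatch.Elapsed;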

YoloV8 Coprocessor application running on Nvidia Jetson Orin

When I ran the demonstration Ultralytics YoloV8 object detection console application on my development desktop (13th Gen Intel(R) Core(TM) i7-13700 2.10 GHz with 32.0 GB) the pre-processing was much faster.

The much shorter pre-processing and longer inferencing durations were not a surprise as my development desktop does not have a Graphics Processing Unit (GPU).

Test image used for testing on Jetson device and development PC

The test image, taken with my mobile, was 3606×2715 pixels, which was representative of the security camera images to be processed by the solution.

Redgate ANTS Performance Profiler instrumentation of application execution

On my development box, running the application with the Redgate ANTS Performance Profiler highlighted that the Compunet YoloV8 code converting the image to a DenseTensor could be an issue.

 public static void ProcessToTensor(Image<Rgb24> image, Size modelSize, bool originalAspectRatio, DenseTensor<float> target, int batch)
 {
    var options = new ResizeOptions()
    {
       Size = modelSize,
       Mode = originalAspectRatio ? ResizeMode.Max : ResizeMode.Stretch,
    };

    // Resize the image to the model input size so the padding calculations below are valid
    image.Mutate(x => x.Resize(options));

    var xPadding = (modelSize.Width - image.Width) / 2;
    var yPadding = (modelSize.Height - image.Height) / 2;

    var width = image.Width;
    var height = image.Height;

    // Pre-calculate strides for performance
    var strideBatchR = target.Strides[0] * batch + target.Strides[1] * 0;
    var strideBatchG = target.Strides[0] * batch + target.Strides[1] * 1;
    var strideBatchB = target.Strides[0] * batch + target.Strides[1] * 2;
    var strideY = target.Strides[2];
    var strideX = target.Strides[3];

    // Get a span of the whole tensor for fast access
    var tensorSpan = target.Buffer;

    // Try get continuous memory block of the entire image data
    if (image.DangerousTryGetSinglePixelMemory(out var memory))
    {
       Parallel.For(0, width * height, index =>
       {
             int x = index % width;
             int y = index / width;
             int tensorIndex = strideBatchR + strideY * (y + yPadding) + strideX * (x + xPadding);

             var pixel = memory.Span[index];
             WritePixel(tensorSpan.Span, tensorIndex, pixel, strideBatchR, strideBatchG, strideBatchB);
       });
    }
    else
    {
       Parallel.For(0, height, y =>
       {
             var rowSpan = image.DangerousGetPixelRowMemory(y).Span;
             int tensorYIndex = strideBatchR + strideY * (y + yPadding);

             for (int x = 0; x < width; x++)
             {
                int tensorIndex = tensorYIndex + strideX * (x + xPadding);
                var pixel = rowSpan[x];
                WritePixel(tensorSpan.Span, tensorIndex, pixel, strideBatchR, strideBatchG, strideBatchB);
             }
       });
    }
 }

 private static void WritePixel(Span<float> tensorSpan, int tensorIndex, Rgb24 pixel, int strideBatchR, int strideBatchG, int strideBatchB)
 {
    tensorSpan[tensorIndex] = pixel.R / 255f;
    tensorSpan[tensorIndex + strideBatchG - strideBatchR] = pixel.G / 255f;
    tensorSpan[tensorIndex + strideBatchB - strideBatchR] = pixel.B / 255f;
 }

For a 3606×2715 image there are nearly ten million source pixels to process, so the implementation of WritePixel and the overall approach used in ProcessToTensor have a significant impact on performance.

YoloV8 Coprocessor application running on Nvidia Jetson Orin with a resized image

Resizing the images had a significant impact on performance on both the development box and the Nvidia Jetson Orin. This will need some investigation to see how much resizing the images impacts the performance and accuracy of the model.

The ProcessToTensor method has already had some performance optimisations, which improved performance by roughly 20%. There have been discussions about optimising similar code, e.g. Efficient Bitmap to OnnxRuntime Tensor in C# and Efficient RGB Image to Tensor in dotnet, which look applicable and will be evaluated.
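
One example of the sort of micro-optimisation to be evaluated: the three floating-point divisions per pixel in WritePixel could be replaced with a pre-computed lookup table (a sketch, an assumption rather than the Compunet implementation):

// Pre-computed byte -> normalised float lookup, avoiding three divisions per pixel
private static readonly float[] ByteToFloat = Enumerable.Range(0, 256).Select(i => i / 255f).ToArray();

private static void WritePixel(Span<float> tensorSpan, int tensorIndex, Rgb24 pixel, int strideBatchR, int strideBatchG, int strideBatchB)
{
   tensorSpan[tensorIndex] = ByteToFloat[pixel.R];
   tensorSpan[tensorIndex + strideBatchG - strideBatchR] = ByteToFloat[pixel.G];
   tensorSpan[tensorIndex + strideBatchB - strideBatchR] = ByteToFloat[pixel.B];
}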

YoloV8 ONNX – Nvidia Jetson Orin Nano™ GPU TensorRT Inferencing

The Seeedstudio reComputer J3011 has two processors, an ARM64 CPU and an Nvidia Jetson Orin 8G. To speed up inferencing on the Nvidia Jetson Orin 8G with TensorRT I built an Open Neural Network Exchange (ONNX) Runtime TensorRT Execution Provider.

Roboflow Universe Tennis Ball by Ugur ozdemir dataset

The Open Neural Network Exchange (ONNX) model used was trained on the Roboflow Universe Tennis Ball dataset by Ugur ozdemir, which has 23,696 images. The initial version of the TensorRT integration used the builder.UseTensorrt method of the IYoloV8Builder interface.

...
YoloV8Builder builder = new YoloV8Builder();

builder.UseOnnxModel(_applicationSettings.ModelPath);

if (_applicationSettings.UseTensorrt)
{
   Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} Using TensorRT");

   builder.UseTensorrt(_applicationSettings.DeviceId);
}
...

When the YoloV8.Coprocessor.Detect.Image application was configured to use the NVIDIA TensorRT Execution Provider the average inference time was 58mSec, but it took roughly 7 minutes to build and optimise the engine each time the application was run.

Generating the TensorRT engine every time the application is started

The TensorRT Execution Provider has a number of configuration options, but the IYoloV8Builder interface had to be modified, with UseCuda, UseRocm, UseTensorrt and UseTvm overloads implemented, to allow additional configuration settings.

...
public class YoloV8Builder : IYoloV8Builder
{
...
    public IYoloV8Builder UseOnnxModel(BinarySelector model)
    {
        _model = model;

        return this;
    }

#if GPURELEASE
    public IYoloV8Builder UseCuda(int deviceId) => WithSessionOptions(SessionOptions.MakeSessionOptionWithCudaProvider(deviceId));

    public IYoloV8Builder UseCuda(OrtCUDAProviderOptions options) => WithSessionOptions(SessionOptions.MakeSessionOptionWithCudaProvider(options));

    public IYoloV8Builder UseRocm(int deviceId) => WithSessionOptions(SessionOptions.MakeSessionOptionWithRocmProvider(deviceId));
    
    // Couldn't test this - don't have suitable hardware
    public IYoloV8Builder UseRocm(OrtROCMProviderOptions options) => WithSessionOptions(SessionOptions.MakeSessionOptionWithRocmProvider(options));

    public IYoloV8Builder UseTensorrt(int deviceId) => WithSessionOptions(SessionOptions.MakeSessionOptionWithTensorrtProvider(deviceId));

    public IYoloV8Builder UseTensorrt(OrtTensorRTProviderOptions options) => WithSessionOptions(SessionOptions.MakeSessionOptionWithTensorrtProvider(options));

    // Couldn't test this - don't have suitable hardware
    public IYoloV8Builder UseTvm(string settings = "") => WithSessionOptions(SessionOptions.MakeSessionOptionWithTvmProvider(settings));
#endif
...
}

The trt_engine_cache_enable and trt_engine_cache_path TensorRT Execution Provider session options configure the engine to be cached when it is built for the first time, so that when a new inference session is created the engine can be loaded directly from disk.

...
YoloV8Builder builder = new YoloV8Builder();

builder.UseOnnxModel(_applicationSettings.ModelPath);

if (_applicationSettings.UseTensorrt)
{
   Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} Using TensorRT");

   OrtTensorRTProviderOptions tensorRToptions = new OrtTensorRTProviderOptions();

   Dictionary<string, string> optionKeyValuePairs = new Dictionary<string, string>();

   optionKeyValuePairs.Add("trt_engine_cache_enable", "1");
   optionKeyValuePairs.Add("trt_engine_cache_path", "enginecache/");

   tensorRToptions.UpdateOptions(optionKeyValuePairs);

   builder.UseTensorrt(tensorRToptions);
}
...

In order to validate that the engine loaded from the trt_engine_cache_path is usable for the current inference, an engine profile is also cached and loaded along with the engine.

If the current input shapes are within the range of the engine profile, the loaded engine can be safely used. If the input shapes are out of range, the profile will be updated and the engine recreated based on the new profile.
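
The provider also has trt_profile_min_shapes, trt_profile_max_shapes and trt_profile_opt_shapes options for declaring the expected input range up front; a sketch of adding them to the optionKeyValuePairs dictionary shown earlier (the input tensor name "images" is the Ultralytics YoloV8 default and is an assumption here):

// Declare a fixed 1x3x640x640 input so the cached engine profile covers it
optionKeyValuePairs.Add("trt_profile_min_shapes", "images:1x3x640x640");
optionKeyValuePairs.Add("trt_profile_max_shapes", "images:1x3x640x640");
optionKeyValuePairs.Add("trt_profile_opt_shapes", "images:1x3x640x640");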

Reusing the TensorRT engine built the first time the application is started

When the YoloV8.Coprocessor.Detect.Image application was configured to use NVIDIA TensorRT and the engine was cached, the average inference time was 58mSec and the Build method took roughly 10sec to execute after the application had been run once.

trtexec console application output

The trtexec utility can “pre-generate” engines but there doesn't appear to be a way to use them with the TensorRT Execution Provider.

YoloV8 ONNX – Nvidia Jetson Orin Nano™ GPU CUDA Inferencing

The Seeedstudio reComputer J3011 has two processors, an ARM64 CPU and an Nvidia Jetson Orin 8G. To speed up inferencing on the Nvidia Jetson Orin 8G with the Compute Unified Device Architecture (CUDA) I built an Open Neural Network Exchange (ONNX) Runtime CUDA Execution Provider.

The Open Neural Network Exchange (ONNX) model used was trained on the Roboflow Universe Tennis Ball dataset by Ugur ozdemir, which has 23,696 images.

// Load the app settings into configuration
var configuration = new ConfigurationBuilder()
      .AddJsonFile("appsettings.json", false, true)
      .Build();

_applicationSettings = configuration.GetSection("ApplicationSettings").Get<Model.ApplicationSettings>();

Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model load: {_applicationSettings.ModelPath}");

YoloV8Builder builder = new YoloV8Builder();

builder.UseOnnxModel(_applicationSettings.ModelPath);

if (_applicationSettings.UseCuda)
{
   builder.UseCuda(_applicationSettings.DeviceId);
}

if (_applicationSettings.UseTensorrt)
{
   builder.UseTensorrt(_applicationSettings.DeviceId);
}

/*
builder.WithConfiguration(c =>
{
});
*/

/*
builder.WithSessionOptions(new Microsoft.ML.OnnxRuntime.SessionOptions()
{

});
*/

using (var image = await SixLabors.ImageSharp.Image.LoadAsync<Rgba32>(_applicationSettings.ImageInputPath))
using (var predictor = builder.Build())
{
   var result = await predictor.DetectAsync(image);

   Console.WriteLine();
   Console.WriteLine($"Speed: {result.Speed}");
   Console.WriteLine();

   foreach (var prediction in result.Boxes)
   {
      Console.WriteLine($" Class {prediction.Class} {(prediction.Confidence * 100.0):f1}% X:{prediction.Bounds.X} Y:{prediction.Bounds.Y} Width:{prediction.Bounds.Width} Height:{prediction.Bounds.Height}");
   }

   Console.WriteLine();

   Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} Plot and save : {_applicationSettings.ImageOutputPath}");

   using (var imageOutput = await result.PlotImageAsync(image))
   {
      await imageOutput.SaveAsJpegAsync(_applicationSettings.ImageOutputPath);
   }
}

When the YoloV8.Coprocessor.Detect.Image application was configured to run on the ARM64 CPU the average inference time was 729mSec.

The first time I ran the YoloV8.Coprocessor.Detect.Image application configured to use CUDA for inferencing, it failed badly.

The YoloV8.Coprocessor.Detect.Image application was then configured to use CUDA and the average inferencing time was 85mSec, roughly 8.5 times faster than the ARM64 CPU.

It took a couple of weeks to get the YoloV8.Coprocessor.Detect.Image application inferencing on the Nvidia Jetson Orin 8G coprocessor and this will be covered in detail in other posts.

YoloV8 ONNX – Nvidia Jetson Orin Nano™ ARM64 CPU Inferencing

I configured the demonstration Ultralytics YoloV8 object detection (yolov8s.onnx) console application to process a 1920×1080 image from a security camera on my desktop development box (13th Gen Intel(R) Core(TM) i7-13700 2.10 GHz with 32.0 GB).

Object Detection sample application running on my development box

A Seeedstudio reComputer J3011 uses an Nvidia Jetson Orin 8G and looked like a cost-effective platform to explore how a dedicated Artificial Intelligence (AI) co-processor could reduce inferencing times.

To establish a “baseline” I “published” the demonstration application on my development box which created a folder with all the files required to run the application on the Seeedstudio reComputer J3011 ARM64 CPU. I had to manually merge the “User Secrets” and appsettings.json files so the camera connection configuration was correct.

The runtimes folder contained a number of folders with the native runtime files for the supported Open Neural Network Exchange (ONNX) platforms.

Object Detection application publish runtimes folder

The Nvidia Jetson Orin ARM64 CPU requires the linux-arm64 ONNX runtime, which was “automagically” detected. (In previous versions of ML.Net the native runtime had to be copied to the execution directory.)

Linux ONNX ARM64 runtime

The final step was to use the demonstration Ultralytics YoloV8 object detection (yolov8s.onnx) console application to process a 1920×1080 image from a security camera on the reComputer J3011 (6-core Arm® Cortex® 64-bit CPU 1.5GHz processor).

Object Detection sample application running on my Seeedstudio reComputer J3011

When I averaged the pre-processing, inferencing and post-processing times for both devices over 20 executions my development box was much faster, which was not a surprise, though the reComputer J3011 post-processing times were a bit faster than I was expecting.

ARM64 CPU Preprocess 0.05s Inference 0.31s Postprocess 0.05s

ML.Net YoloV5 + Security Camera Revisited

This post is about “revisiting” my ML.Net YoloV5 + Camera on ARM64 Raspberry PI application, updating it to .NET 6, the latest version of the TechWings yolov5-net library (formerly from mentalstack) and the latest version of the ML.Net Open Neural Network Exchange (ONNX) libraries.

Visual Studio 2022 with updated NuGet packages

The updated TechWings yolov5-net library now uses Six Labors ImageSharp for markup rather than System.Drawing.Common. (I found System.Drawing.Common a massive Pain in the Arse (PiTA))

private static async void ImageUpdateTimerCallback(object state)
{
   DateTime requestAtUtc = DateTime.UtcNow;

   // Just in case - stop the code being called while a photo is already in progress
   if (_cameraBusy)
   {
      return;
   }
   _cameraBusy = true;

   Console.WriteLine($"{DateTime.UtcNow:yy-MM-dd HH:mm:ss} Image processing start");

   try
   {
#if SECURITY_CAMERA
      Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss:fff} Security Camera Image download start");

      using (Stream cameraStream = await _httpClient.GetStreamAsync(_applicationSettings.CameraUrl))
      using (Stream fileStream = File.Create(_applicationSettings.ImageInputFilenameLocal))
      {
         await cameraStream.CopyToAsync(fileStream);
      }

      Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss:fff} Security Camera Image download done");
#endif

      List<YoloPrediction> predictions;

      // Process the image on local file system
      using (Image<Rgba32> image = await Image.LoadAsync<Rgba32>(_applicationSettings.ImageInputFilenameLocal))
      {
         Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss:fff} YoloV5 inferencing start");
         predictions = _scorer.Predict(image);
         Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss:fff} YoloV5 inferencing done");

#if OUTPUT_IMAGE_MARKUP
         Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss:fff} Image markup start");

         var font = new Font(new FontCollection().Add(_applicationSettings.ImageOutputMarkupFontPath), _applicationSettings.ImageOutputMarkupFontSize);

         foreach (var prediction in predictions) // iterate predictions to draw results
         {
            double score = Math.Round(prediction.Score, 2);

            var (x, y) = (prediction.Rectangle.Left - 3, prediction.Rectangle.Top - 23);

            image.Mutate(a => a.DrawPolygon(Pens.Solid(prediction.Label.Color, 1),
                  new PointF(prediction.Rectangle.Left, prediction.Rectangle.Top),
                  new PointF(prediction.Rectangle.Right, prediction.Rectangle.Top),
                  new PointF(prediction.Rectangle.Right, prediction.Rectangle.Bottom),
                  new PointF(prediction.Rectangle.Left, prediction.Rectangle.Bottom)
            ));

            image.Mutate(a => a.DrawText($"{prediction.Label.Name} ({score})",
                  font, prediction.Label.Color, new PointF(x, y)));
         }

         await image.SaveAsJpegAsync(_applicationSettings.ImageOutputFilenameLocal);

         Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss:fff} Image markup done");
#endif
      }


#if PREDICTION_CLASSES
      Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss:fff} Image classes start");
      foreach (var prediction in predictions)
      {
         Console.WriteLine($"  Name:{prediction.Label.Name} Score:{prediction.Score:f2} Valid:{prediction.Score > _applicationSettings.PredictionScoreThreshold}");
      }
      Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss:fff} Image classes done");
#endif

#if PREDICTION_CLASSES_OF_INTEREST
      IEnumerable<string> predictionsOfInterest = predictions.Where(p => p.Score > _applicationSettings.PredictionScoreThreshold).Select(c => c.Label.Name).Intersect(_applicationSettings.PredictionLabelsOfInterest, StringComparer.OrdinalIgnoreCase);

      if (predictionsOfInterest.Any())
      {
         Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss} Camera image comtains {String.Join(",", predictionsOfInterest)}");
      }

#endif
   }
   catch (Exception ex)
   {
      Console.WriteLine($"{DateTime.UtcNow:yy-MM-dd HH:mm:ss} Camera image download, upload or post procesing failed {ex.Message}");
   }
   finally
   {
      _cameraBusy = false;
   }

   TimeSpan duration = DateTime.UtcNow - requestAtUtc;

   Console.WriteLine($"{DateTime.UtcNow:yy-MM-dd HH:mm:ss} Image processing done {duration.TotalSeconds:f2} sec");
   Console.WriteLine();
}

The names of the input image, output image and YoloV5 model file are configured in the appsettings.json (on device) or secrets.json (Visual Studio 2022 desktop) file. The location (ImageOutputMarkupFontPath) and size (ImageOutputMarkupFontSize) of the font used are configurable to make it easier to run the application on different devices and operating systems.

{
   "ApplicationSettings": {
      "ImageTimerDue": "0.00:00:15",
      "ImageTimerPeriod": "0.00:00:30",

      "CameraUrl": "HTTP://10.0.0.56:85/images/snapshot.jpg",
      "CameraUserName": "",
      "CameraUserPassword": "",

      "ImageInputFilenameLocal": "InputLatest.jpg",
      "ImageOutputFilenameLocal": "OutputLatest.jpg",

      "ImageOutputMarkupFontPath": "C:/Windows/Fonts/consola.ttf",
      "ImageOutputMarkupFontSize": 16,

      "YoloV5ModelPath": "YoloV5/yolov5s.onnx",

      "PredictionScoreThreshold": 0.5,

      "PredictionLabelsOfInterest": [
         "bicycle",
         "person",
         "bench"
      ]
   }
}

The test-rig consisted of a Unv ADZK-10 Security Camera, Power over Ethernet(PoE) module and my development desktop PC.

My bicycle and “mother-in-law's” car in the backyard
YoloV5ObjectDetectionCamera running on my desktop PC

Once the YoloV5s model was loaded, inferencing was taking roughly 0.47 seconds.

Marked up image of my bicycle and “mother-in-law's” car in the backyard

Summary

Again, I was “standing on the shoulders of giants” and the TechWings code just worked. With a pretrained YoloV5 model and the ML.Net Open Neural Network Exchange (ONNX) plumbing, it took only a couple of hours to update the application. Most of this time was spent learning about the Six Labors ImageSharp library to mark up the images.

Smartish Edge Camera – Azure IoT Updateable Properties (not persisted)

This post builds on my Smartish Edge Camera – Azure IoT Direct Methods post, adding two updateable properties, the due and period values, for the image capture and processing timer. The two properties can be updated together or independently but the values are not persisted.

When I was searching for answers I found this code in many posts and articles, but it didn't really cover my scenario.

private static async Task OnDesiredPropertyChanged(TwinCollection desiredProperties, 
  object userContext)
{
   Console.WriteLine("desired property chPleange:");
   Console.WriteLine(JsonConvert.SerializeObject(desiredProperties));
   Console.WriteLine("Sending current time as reported property");
   TwinCollection reportedProperties = new TwinCollection
   {
       ["DateTimeLastDesiredPropertyChangeReceived"] = DateTime.Now
   };

    await Client.UpdateReportedPropertiesAsync(reportedProperties).ConfigureAwait(false);
}

When AZURE_DEVICE_PROPERTIES is defined in the SmartEdgeCameraAzureIoTService project properties the device reports a number of properties on startup, and SetDesiredPropertyUpdateCallbackAsync is used to configure the method called whenever the client receives a state update (desired or reported) from the Azure IoT Hub.

protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
	_logger.LogInformation("Azure IoT Smart Edge Camera Service starting");

	try
	{
#if AZURE_IOT_HUB_CONNECTION
		_deviceClient = await AzureIoTHubConnection();
#endif
#if AZURE_IOT_HUB_DPS_CONNECTION
		_deviceClient = await AzureIoTHubDpsConnection();
#endif

#if AZURE_DEVICE_PROPERTIES
		_logger.LogTrace("ReportedPropeties upload start");

		TwinCollection reportedProperties = new TwinCollection();

		reportedProperties["OSVersion"] = Environment.OSVersion.VersionString;
		reportedProperties["MachineName"] = Environment.MachineName;
		reportedProperties["ApplicationVersion"] = Assembly.GetAssembly(typeof(Program)).GetName().Version;
		reportedProperties["ImageTimerDue"] = _applicationSettings.ImageTimerDue;
		reportedProperties["ImageTimerPeriod"] = _applicationSettings.ImageTimerPeriod;
		reportedProperties["YoloV5ModelPath"] = _applicationSettings.YoloV5ModelPath;

		reportedProperties["PredictionScoreThreshold"] = _applicationSettings.PredictionScoreThreshold;
		reportedProperties["PredictionLabelsOfInterest"] = _applicationSettings.PredictionLabelsOfInterest;
		reportedProperties["PredictionLabelsMinimum"] = _applicationSettings.PredictionLabelsMinimum;

		await _deviceClient.UpdateReportedPropertiesAsync(reportedProperties, stoppingToken);

		_logger.LogTrace("ReportedPropeties upload done");
#endif

		_logger.LogTrace("YoloV5 model setup start");
		_scorer = new YoloScorer<YoloCocoP5Model>(_applicationSettings.YoloV5ModelPath);
		_logger.LogTrace("YoloV5 model setup done");

		_ImageUpdatetimer = new Timer(ImageUpdateTimerCallback, null, _applicationSettings.ImageTimerDue, _applicationSettings.ImageTimerPeriod);

		await _deviceClient.SetMethodHandlerAsync("ImageTimerStart", ImageTimerStartHandler, null);
		await _deviceClient.SetMethodHandlerAsync("ImageTimerStop", ImageTimerStopHandler, null);
		await _deviceClient.SetMethodDefaultHandlerAsync(DefaultHandler, null);

		await _deviceClient.SetDesiredPropertyUpdateCallbackAsync(OnDesiredPropertyChangedAsync, null);

		try
		{
			await Task.Delay(Timeout.Infinite, stoppingToken);
		}
		catch (TaskCanceledException)
		{
			_logger.LogInformation("Application shutown requested");
		}
	}
	catch (Exception ex)
	{
		_logger.LogError(ex, "Application startup failure");
	}
	finally
	{
		_deviceClient?.Dispose();
	}

	_logger.LogInformation("Azure IoT Smart Edge Camera Service shutdown");
}

// Lots of other code here

private async Task OnDesiredPropertyChangedAsync(TwinCollection desiredProperties, object userContext)
{
	TwinCollection reportedProperties = new TwinCollection();

	_logger.LogInformation("OnDesiredPropertyChanged handler");

	// NB - This approach does not save the ImageTimerDue or ImageTimerPeriod, so a stop/start will return to the appsettings.json
	// configuration values. If only one parameter is specified the other defaults to the appsettings.json value. If the timer
	// settings are changed I think they won't take effect until the next time the Timer fires.

	try
	{
		// Check to see if either of ImageTimerDue or ImageTimerPeriod has changed
		if (!desiredProperties.Contains("ImageTimerDue") && !desiredProperties.Contains("ImageTimerPeriod"))
		{
			_logger.LogInformation("OnDesiredPropertyChanged neither ImageTimerDue or ImageTimerPeriod present");
			return;
		}

		TimeSpan imageTimerDue = _applicationSettings.ImageTimerDue;

		// Check that format of ImageTimerDue valid if present
		if (desiredProperties.Contains("ImageTimerDue"))
		{
			if (TimeSpan.TryParse(desiredProperties["ImageTimerDue"].Value, out imageTimerDue))
			{
				reportedProperties["ImageTimerDue"] = imageTimerDue;
			}
			else
			{
				_logger.LogInformation("OnDesiredPropertyChanged ImageTimerDue invalid");
				return;
			}
		}

		TimeSpan imageTimerPeriod = _applicationSettings.ImageTimerPeriod;

		// Check that format of ImageTimerPeriod valid if present
		if (desiredProperties.Contains("ImageTimerPeriod"))
		{
			if (TimeSpan.TryParse(desiredProperties["ImageTimerPeriod"].Value, out imageTimerPeriod))
			{
				reportedProperties["ImageTimerPeriod"] = imageTimerPeriod;
			}
			else
			{
				_logger.LogInformation("OnDesiredPropertyChanged ImageTimerPeriod invalid");
				return;
			}
		}

		_logger.LogInformation("Desired Due:{0} Period:{1}", imageTimerDue, imageTimerPeriod);

		if (!_ImageUpdatetimer.Change(imageTimerDue, imageTimerPeriod))
		{
			_logger.LogInformation("Desired Due:{0} Period:{1} failed", imageTimerDue, imageTimerPeriod);
		}

		await _deviceClient.UpdateReportedPropertiesAsync(reportedProperties);
	}
	catch (Exception ex)
	{
		_logger.LogError(ex, "OnDesiredPropertyChangedAsync handler failed");
	}
}

The TwinCollection desiredProperties is checked for the ImageTimerDue and ImageTimerPeriod properties, and if either of these is present and valid the Timer.Change method is called.

The AzureMLNetSmartEdgeCamera supports both Azure IoT Hub and Azure IoT Central, so I have included images from Azure IoT Explorer and my Azure IoT Central Templates.

SmartEdge Camera Device Twin properties in Azure IoT Explorer

When I modified, then saved the Azure IoT Hub Device Twin desired properties JavaScript Object Notation (JSON) in Azure IoT Explorer, the method configured with SetDesiredPropertyUpdateCallbackAsync was invoked on the device.
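
The desired properties section of the Device Twin JSON looked something like this sketch (the values are illustrative):

{
   "properties": {
      "desired": {
         "ImageTimerDue": "0.00:00:15",
         "ImageTimerPeriod": "0.00:01:00"
      }
   }
}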

In Azure IoT Central I added two Capabilities to the device template, the timer properties ImageTimerDue and ImageTimerPeriod.
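
A sketch of what one of the capabilities looks like in the device template's Digital Twins Definition Language (DTDL), assuming a string schema to match the TimeSpan format the device parses:

{
   "@type": "Property",
   "name": "ImageTimerDue",
   "displayName": "Image Timer Due",
   "schema": "string",
   "writable": true
}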

Azure IoT Central SmartEdgeCamera Device template capabilities

I added a View to the template so the two properties could be changed. (I didn't configure either as required.)

Azure IoT Central SmartEdgeCamera Device Default view designer

In the “Device Properties” “Operation Tab”, when I changed the ImageTimerDue and/or ImageTimerPeriod there was visual feedback that an update was in progress.

Azure IoT Central SmartEdgeCamera Device Properties update start

Then, on the device, the method configured with SetDesiredPropertyUpdateCallbackAsync was invoked in the SmartEdgeCameraAzureIoTService.

SmartEdge Camera Console application displaying updated properties

Once the properties have been updated on the device, the UpdateReportedPropertiesAsync method is called.

Then a message with the updated property values from the device was visible in the telemetry.

Azure IoT Central SmartEdgeCamera Device Properties update done

Then finally the “Operation Tab” displayed a visual confirmation that the value(s) had been updated.

Smartish Edge Camera – Azure IoT Direct Methods

This post builds on my Smartish Edge Camera – Azure IoT Image-Upload post, adding two Direct Methods for starting and stopping the image capture and processing timer. The AzureMLNetSmartEdgeCamera supports both Azure IoT Hub and Azure IoT Central connectivity.

Azure IoT Explorer invoking a Direct Method

BEWARE – The Direct Method names are case sensitive, which regularly trips me up when I use Azure IoT Explorer. If the Direct Method name is unknown a default handler is called, the issue is logged and a Hyper Text Transfer Protocol (HTTP) Not Found (404) error is returned.

protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
	_logger.LogInformation("Azure IoT Smart Edge Camera Service starting");

	try
	{
#if AZURE_IOT_HUB_CONNECTION
		_deviceClient = await AzureIoTHubConnection();
#endif
#if AZURE_IOT_HUB_DPS_CONNECTION
		_deviceClient = await AzureIoTHubDpsConnection();
#endif

...
		_logger.LogTrace("YoloV5 model setup start");
		_scorer = new YoloScorer<YoloCocoP5Model>(_applicationSettings.YoloV5ModelPath);
		_logger.LogTrace("YoloV5 model setup done");

		_ImageUpdatetimer = new Timer(ImageUpdateTimerCallback, null, _applicationSettings.ImageTimerDue, _applicationSettings.ImageTimerPeriod);

		await _deviceClient.SetMethodHandlerAsync("ImageTimerStart", ImageTimerStartHandler, null);
		await _deviceClient.SetMethodHandlerAsync("ImageTimerStop", ImageTimerStopHandler, null);
		await _deviceClient.SetMethodDefaultHandlerAsync(DefaultHandler, null);
...
		try
		{
			await Task.Delay(Timeout.Infinite, stoppingToken);
		}
		catch (TaskCanceledException)
		{
			_logger.LogInformation("Application shutown requested");
		}
	}
	catch (Exception ex)
	{
		_logger.LogError(ex, "Application startup failure");
	}
	finally
	{
		_deviceClient?.Dispose();
	}

	_logger.LogInformation("Azure IoT Smart Edge Camera Service shutdown");
}

private Task<MethodResponse> ImageTimerStartHandler(MethodRequest methodRequest, object userContext)
{
	_logger.LogInformation("ImageUpdatetimer Start Due:{0} Period:{1}", _applicationSettings.ImageTimerDue, _applicationSettings.ImageTimerPeriod);

	_ImageUpdatetimer.Change(_applicationSettings.ImageTimerDue, _applicationSettings.ImageTimerPeriod);

	// No await needed, so return a completed Task rather than marking the handler async
	return Task.FromResult(new MethodResponse((short)HttpStatusCode.OK));
}

private Task<MethodResponse> ImageTimerStopHandler(MethodRequest methodRequest, object userContext)
{
	_logger.LogInformation("ImageUpdatetimer Stop");

	_ImageUpdatetimer.Change(Timeout.Infinite, Timeout.Infinite);

	return Task.FromResult(new MethodResponse((short)HttpStatusCode.OK));
}

private Task<MethodResponse> DefaultHandler(MethodRequest methodRequest, object userContext)
{
	_logger.LogInformation("Direct Method default handler Name:{0}", methodRequest.Name);

	return Task.FromResult(new MethodResponse((short)HttpStatusCode.NotFound));
}

I created an Azure IoT Central Template with two command capabilities. (For more detail see my post TTI V3 Connector Azure IoT Central Cloud to Device (C2D).)
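
A sketch of what one of the command capabilities looks like in Digital Twins Definition Language (DTDL) - the display name is assumed:

{
   "@type": "Command",
   "name": "ImageTimerStart",
   "displayName": "Image Timer Start"
}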

Azure IoT Central Template Direct Method configuration
Azure IoT Central Template Direct Method invocation
Azure Smart Edge Camera console application Start Direct Method call

Initially, I had one long post which covered Direct Methods, Readonly Properties and Updateable Properties but it got too long so I split it into three.