Building Edge AI with GitHub Copilot – Faster R-CNN Client

Introduction

For this post I’ll be using GitHub Copilot to generate code for a console application that uses a Faster R-CNN ONNX model to process an image loaded from disk.

please write a C# console application that uses Onnx to run a ONNX Faster-RCNN object detection model on an image loaded from disk

I added the System.Drawing.Common and Microsoft.ML.OnnxRuntime NuGet packages and the required using directives, and then the code compiled. The ProcessOutput method was stubbed out, so I will need to figure out the format of the output tensor.

static void Main(string[] args)
{
   string modelPath = "FasterRCNN-10.onnx";
   string imagePath = "sports.jpg";

   using var session = new InferenceSession(modelPath);
   using var image = new Bitmap(imagePath);

   var inputTensor = ExtractTensorFromImage(image);
   var inputs = new List<NamedOnnxValue>
   {
       NamedOnnxValue.CreateFromTensor("image", inputTensor)
   };

   using IDisposableReadOnlyCollection<DisposableNamedOnnxValue> results = session.Run(inputs);
   var output = results.First().AsEnumerable<float>().ToArray();

   // Process the output (e.g., draw bounding boxes on the image)
   ProcessOutput(output, image);

   image.Save("output.jpg", ImageFormat.Jpeg);
   Console.WriteLine("Object detection completed. Output saved as output.jpg");
}

private static DenseTensor<float> ExtractTensorFromImage(Bitmap image)
{
   int width = image.Width;
   int height = image.Height;
   var tensor = new DenseTensor<float>(new[] { 1, 3, height, width });

   for (int y = 0; y < height; y++)
   {
      for (int x = 0; x < width; x++)
      {
         Color pixel = image.GetPixel(x, y);
         tensor[0, 0, y, x] = pixel.R / 255.0f;
         tensor[0, 1, y, x] = pixel.G / 255.0f;
         tensor[0, 2, y, x] = pixel.B / 255.0f;
      }
   }

   return tensor;
}

private static void ProcessOutput(float[] output, Bitmap image)
{
   // Implement the logic to process the output and draw bounding boxes on the image
   // This is a placeholder for the actual implementation
}

I fixed the paths to the ONNX model and image files; the application then ran, but there was an error.

The dimensions of the input tensor were wrong because the declaration included a batch dimension.

Change dimensions of tensor by removing batch from declaration

private static DenseTensor<float> ExtractTensorFromImage(Bitmap image)
{
   int width = image.Width;
   int height = image.Height;
   var tensor = new DenseTensor<float>(new[] { 3, height, width });

   for (int y = 0; y < height; y++)
   {
      for (int x = 0; x < width; x++)
      {
         Color pixel = image.GetPixel(x, y);
         tensor[0, y, x] = pixel.R / 255.0f;
         tensor[1, y, x] = pixel.G / 255.0f;
         tensor[2, y, x] = pixel.B / 255.0f;
      }
   }

   return tensor;
}

GitHub Copilot fixed the tensor declaration by removing the batch dimension. The application also used System.Drawing, which is not supported on my target embedded platforms.

change from System.Drawing to ImageSharp

I added the SixLabors.ImageSharp NuGet package and removed System.Drawing.Common, then updated the using directives as required.

using var image = Image.Load<Rgb24>(imagePath);
...
private static DenseTensor<float> ExtractTensorFromImage(Image<Rgb24> image)
{
   int width = image.Width;
   int height = image.Height;
   var tensor = new DenseTensor<float>(new[] { 3, height, width });

   image.ProcessPixelRows(accessor =>
   {
      for (int y = 0; y < height; y++)
      {
         var pixelRow = accessor.GetRowSpan(y);
         for (int x = 0; x < width; x++)
         {
            tensor[0, y, x] = pixelRow[x].R / 255.0f;
            tensor[1, y, x] = pixelRow[x].G / 255.0f;
            tensor[2, y, x] = pixelRow[x].B / 255.0f;
         }
      }
   });

   return tensor;
}

The application ran but there was an error because the dimensions of the input image were wrong.

Based on the Faster R-CNN documentation, I used ImageSharp to resize the image.

Use ImageSharp to resize the image such that both height and width are within the range of [800, 1333], such that both height and width are divisible by 32.

 private static void ResizeImage(Image<Rgb24> image)
 {
    const int minSize = 800;
    const int maxSize = 1333;

    int originalWidth = image.Width;
    int originalHeight = image.Height;

    float scale = Math.Min((float)maxSize / Math.Max(originalWidth, originalHeight), (float)minSize / Math.Min(originalWidth, originalHeight));
    int newWidth = (int)(originalWidth * scale);
    int newHeight = (int)(originalHeight * scale);

    // Ensure dimensions are divisible by 32
    newWidth = (newWidth / 32) * 32;
    newHeight = (newHeight / 32) * 32;

    image.Mutate(x => x.Resize(newWidth, newHeight));
 }
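The sizing logic can also be pulled into a pure helper so it can be checked without loading an image. A minimal sketch under the same rules (ComputeTargetSize is my name for it, not something from the generated code); note that the divisible-by-32 truncation can push a side slightly below the 800 minimum:

```csharp
// Computes the Faster R-CNN input size: scale so both sides fit the
// [800, 1333] range, then truncate each side down to a multiple of 32.
// Note: the truncation step can leave the short side slightly below 800.
private static (int Width, int Height) ComputeTargetSize(int width, int height)
{
   const int minSize = 800;
   const int maxSize = 1333;

   float scale = Math.Min(
       (float)maxSize / Math.Max(width, height),
       (float)minSize / Math.Min(width, height));

   int newWidth = (int)(width * scale) / 32 * 32;
   int newHeight = (int)(height * scale) / 32 * 32;

   return (newWidth, newHeight);
}
```

For example, an 800×800 image maps to 800×800, while a 1000×800 image maps to 992×800 because 1000 is truncated down to the nearest multiple of 32.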

The application ran but the results were bad, so I checked the format of the input tensor and figured out that the R, G, B values needed to be adjusted.

Apply mean to each channel

private static DenseTensor<float> ExtractTensorFromImage(Image<Rgb24> image)
{
   int width = image.Width;
   int height = image.Height;
   var tensor = new DenseTensor<float>(new[] { 3, height, width });

   // Mean values for each channel
   float[] mean = { 0.485f, 0.456f, 0.406f };

   image.ProcessPixelRows(accessor =>
   {
      for (int y = 0; y < height; y++)
      {
         var pixelRow = accessor.GetRowSpan(y);
         for (int x = 0; x < width; x++)
         {
            tensor[0, y, x] = (pixelRow[x].R / 255.0f) - mean[0];
            tensor[1, y, x] = (pixelRow[x].G / 255.0f) - mean[1];
            tensor[2, y, x] = (pixelRow[x].B / 255.0f) - mean[2];
         }
      }
   });

   return tensor;
}

The application ran but the results were still bad, so I checked the format of the input tensor and figured out that the ResNet-50 means had been used and that the input tensor needed to be BGR rather than RGB.

Use FasterRCNN means not resnet ones
Change to B,G,R

private static DenseTensor<float> ExtractTensorFromImage(Image<Rgb24> image)
{
   int width = image.Width;
   int height = image.Height;
   var tensor = new DenseTensor<float>(new[] { 3, height, width });

   // Mean values for each channel for FasterRCNN
   float[] mean = { 102.9801f, 115.9465f, 122.7717f };

   image.ProcessPixelRows(accessor =>
   {
      for (int y = 0; y < height; y++)
      {
         var pixelRow = accessor.GetRowSpan(y);
         for (int x = 0; x < width; x++)
         {
            tensor[0, y, x] = pixelRow[x].B - mean[0];
            tensor[1, y, x] = pixelRow[x].G - mean[1];
            tensor[2, y, x] = pixelRow[x].R - mean[2];
         }
      }
   });

   return tensor;
}

When I inspected the values in the output tensor in the debugger they looked “reasonable”, so I got GitHub Copilot to add the code required to display the results.

Display label, confidence and bounding box

The application ran but there was an exception because the names of the output tensors were wrong.

I used Netron to get the correct output tensor names.
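Netron worked, but the output names can also be listed at runtime from the session metadata. A minimal sketch using the OnnxRuntime InferenceSession API (model path as in the earlier code):

```csharp
using Microsoft.ML.OnnxRuntime;

// Print each output's name, element type, and declared shape so the
// ProcessOutput lookups can use the right names.
using var session = new InferenceSession("FasterRCNN-10.onnx");

foreach (var output in session.OutputMetadata)
{
   Console.WriteLine(
       $"{output.Key}: {output.Value.ElementType} [{string.Join(",", output.Value.Dimensions)}]");
}
```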

I then manually fixed the output tensor names.

private static void ProcessOutput(IDisposableReadOnlyCollection<DisposableNamedOnnxValue> output)
{
   var boxes = output.First(x => x.Name == "6379").AsTensor<float>().ToArray();
   var labels = output.First(x => x.Name == "6381").AsTensor<long>().ToArray();
   var confidences = output.First(x => x.Name == "6383").AsTensor<float>().ToArray();

   const float minConfidence = 0.7f;

   for (int i = 0; i < boxes.Length; i += 4)
   {
      var index = i / 4;
      if (confidences[index] >= minConfidence)
      {
         long label = labels[index];
         float confidence = confidences[index];
         float x1 = boxes[i];
         float y1 = boxes[i + 1];
         float x2 = boxes[i + 2];
         float y2 = boxes[i + 3];

         Console.WriteLine($"Label: {label}, Confidence: {confidence}, Bounding Box: [{x1}, {y1}, {x2}, {y2}]");
      }
   }
}
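The decoding loop above can be factored into a pure helper that is easy to unit test without OnnxRuntime. A sketch (DecodeDetections is my name for it, not something generated):

```csharp
// Decodes the flat Faster R-CNN outputs (4 floats per box) into
// per-detection tuples, keeping only detections above the threshold.
private static List<(long Label, float Confidence, float X1, float Y1, float X2, float Y2)>
   DecodeDetections(float[] boxes, long[] labels, float[] confidences, float minConfidence)
{
   var detections = new List<(long, float, float, float, float, float)>();

   for (int i = 0; i < boxes.Length; i += 4)
   {
      int index = i / 4;
      if (confidences[index] >= minConfidence)
      {
         detections.Add((labels[index], confidences[index],
             boxes[i], boxes[i + 1], boxes[i + 2], boxes[i + 3]));
      }
   }

   return detections;
}
```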

I manually compared the output of the console application with the output of an equivalent YoloSharp application and the results looked close enough.

Summary

The Copilot prompts required to generate the code were significantly more complex than in previous examples, and I had to regularly refer to the documentation to figure out what was wrong. The code wasn’t great, and Copilot didn’t add much value.

The Copilot-generated code in this post is not suitable for production.

Building Edge AI with AI – YoloDotNet Client

Introduction

For this post I have used Copilot prompts to generate code which uses Ultralytics YoloV8 and YoloDotNet by NickSwardh for object detection, object classification, and pose estimation.

Object Detection

static void Main(string[] args)
{
   string modelPath = "path/to/yolo_model.onnx";  // Update with actual model path
   string imagePath = "path/to/image.jpg";        // Update with actual image path

   // Load YOLO model
   var yolo = new YoloModel(modelPath);

   // Load image
   using Bitmap image = new Bitmap(imagePath);

   // Run classification
   var results = yolo.Predict(image);

   // Display results
   foreach (var result in results)
   {
      Console.WriteLine($"Detected: {result.Label} with confidence {result.Confidence:P}");
   }
}

The code didn’t compile, so I tried a different prompt.

This didn’t compile and looked like it was based on https://github.com/techwingslab/yolov5-net. The generated code also added checks that the model and image files existed.

static void Main(string[] args)
{
   string modelPath = "yolov8.onnx"; // Replace with your actual model path
   string imagePath = "image.jpg"; // Replace with your actual image path

   if (!File.Exists(modelPath))
   {
      Console.WriteLine("Error: Model file not found!");
      return;
   }

   if (!File.Exists(imagePath))
   {
      Console.WriteLine("Error: Image file not found!");
      return;
   }

   try
   {
      // Load the YOLO model
      using var yolo = new Yolo(modelPath);

      // Load image from disk
      using var image = new Bitmap(imagePath);

      // Run object detection
      var results = yolo.Predict(image);

      // Display detected objects
      foreach (var result in results)
      {
         Console.WriteLine($"Detected: {result.Label} - Confidence: {result.Confidence}");
         Console.WriteLine($"Bounding Box: {result.BoundingBox}");
      }
   }
   catch (Exception ex)
   {
      Console.WriteLine($"Error: {ex.Message}");
   }
}

This didn’t compile, so I tried again.

static void Main(string[] args)
{
   string modelPath = "yolov8.onnx"; // Replace with your actual model path
   string imagePath = "image.jpg"; // Replace with your actual image path
...
   try
   {
      var yoloOptions = new YoloOptions
      {
         ConfidenceThreshold = 0.5, // Confidence threshold (adjust as needed)
         IoUThreshold = 0.4        // Intersection over Union threshold
      };

      // Load the YOLO model
      //using var yolo = new Yolo(modelPath);
      using var yolo = new Yolo(yoloOptions);

      // Load image from disk
      using var image = new Bitmap(imagePath);

      // Run object detection
      //var results = yolo.Predict(image);
      var results = yolo.RunObjectDetection(image);

      // Display detected objects
      foreach (var result in results)
      {
         Console.WriteLine($"Detected: {result.Label} - Confidence: {result.Confidence}");
         Console.WriteLine($"Bounding Box: {result.BoundingBox}");
      }
   }
   catch (Exception ex)
   {
      Console.WriteLine($"Error: {ex.Message}");
   }
}

This didn’t compile, so I tried a different approach.

I manually modified the code, removing ConfidenceThreshold and IoUThreshold, then used IntelliSense to “discover” and add the ModelType and OnnxModel properties.

static void Main(string[] args)
{
   string modelPath = "yolov8.onnx"; // Replace with your actual model path
   string imagePath = "image.jpg"; // Replace with your actual image path
...
   try
   {
      var yoloOptions = new YoloOptions
      {
         ModelType = ModelType.ObjectDetection,
         OnnxModel = modelPath
      };

      // Load the YOLO model
      //using var yolo = new Yolo(modelPath);
      //using var yolo = new Yolo(yoloOptions);
      //using var yolo = new Yolo(modelPath, yoloOptions);
      using var yolo = new Yolo(yoloOptions);

      // Load image using SkiaSharp
      using var skBitmap = SKBitmap.Decode(imagePath);

      // Convert SKBitmap to a format YOLO can process
      using var skImage = SKImage.FromBitmap(skBitmap);
      using var skData = skImage.Encode(SKEncodedImageFormat.Jpeg, 100);
      using var memoryStream = new MemoryStream(skData.ToArray());
      //var results = yolo.Predict(memoryStream);
      var results = yolo.RunObbDetection(skImage);

      // Display detected objects
      foreach (var result in results)
      {
         Console.WriteLine($"Detected: {result.Label} - Confidence: {result.Confidence}");
         Console.WriteLine($"Bounding Box: {result.BoundingBox}");
      }
   }
   catch (Exception ex)
   {
      Console.WriteLine($"Error: {ex.Message}");
   }
}

The code compiled and ran but didn’t work because YoloDotNet assumed that my computer had CUDA support.

static void Main(string[] args)
{
   string modelPath = "yolov8.onnx"; // Replace with your actual model path
   string imagePath = "image.jpg"; // Replace with your actual image path
...
         try
         {
            var yoloOptions = new YoloOptions
            {
               ModelType = ModelType.ObjectDetection,
               OnnxModel = modelPath,
               Cuda = false
            };

            // Load the YOLO model
            //using var yolo = new Yolo(modelPath);
            //using var yolo = new Yolo(yoloOptions);
            //using var yolo = new Yolo(modelPath, yoloOptions);
            using var yolo = new Yolo(yoloOptions);

            // Load image using SkiaSharp
            using var skBitmap = SKBitmap.Decode(imagePath);

            // Convert SKBitmap to a format YOLO can process
            using var skImage = SKImage.FromBitmap(skBitmap);
            using var skData = skImage.Encode(SKEncodedImageFormat.Jpeg, 100);
            using var memoryStream = new MemoryStream(skData.ToArray());
            //var results = yolo.Predict(memoryStream);
            var results = yolo.RunObbDetection(skImage);

            // Display detected objects
            foreach (var result in results)
            {
               Console.WriteLine($"Detected: {result.Label} - Confidence: {result.Confidence}");
               Console.WriteLine($"Bounding Box: {result.BoundingBox}");
            }
         }
         catch (Exception ex)
         {
            Console.WriteLine($"Error: {ex.Message}");
         }
      }
   }
}

The code compiled and ran but didn’t work because I had accidentally used oriented bounding box (OBB) detection rather than object detection.

I was getting sick of prompts, so I manually fixed the SKImage loading, switched to RunObjectDetection, and then removed the “cruft”.

static void Main(string[] args)
{
   string modelPath = "yolov8s.onnx"; // Replace with your actual model path
   string imagePath = "sports.jpg"; // Replace with your actual image path
...
   try
   {
      var yoloOptions = new YoloOptions
      {
         ModelType = ModelType.ObjectDetection,
         OnnxModel = modelPath,
         Cuda = false
      };

      using var yolo = new Yolo(yoloOptions);

     using var skImage = SKImage.FromEncodedData(imagePath);

      var results = yolo.RunObjectDetection(skImage);

      foreach (var result in results)
      {
         Console.WriteLine($"Detected: {result.Label} - Confidence: {result.Confidence:F2}");
         Console.WriteLine($"Bounding Box: {result.BoundingBox}");
      }
   }
   catch (Exception ex)
   {
      Console.WriteLine($"Error: {ex.Message}");
   }

   Console.WriteLine("Press Enter to exit the application");
   Console.ReadLine();
}

I tested the implementation with the sample “sports” image from the YoloSharp GitHub repository.

The console application output looked reasonable.

Classification

My initial Copilot prompt

I don’t understand why a reference to OpenCV was included.

static void Main(string[] args)
{
   string modelPath = "path/to/yolo_model.onnx"; // Update with actual model path
   string imagePath = "path/to/image.jpg"; // Update with actual image path

   // Load YOLO model
   var yolo = new YoloModel(modelPath);

   // Load image
   using Bitmap image = new Bitmap(imagePath);

   // Run classification
   var results = yolo.Predict(image);

   // Display results
  foreach (var result in results)
  {
      Console.WriteLine($"Detected: {result.Label} with confidence {result.Confidence:P}");
   }
}

The code didn’t compile, so I prompted for the code to be modified to use SkiaSharp, which is used by YoloDotNet.

This was a bit strange, so I tried again

I was getting sick of prompts, so I manually fixed the SKImage loading, switched to RunClassification, and then removed the “cruft”.

static void Main(string[] args)
{
   string modelPath = "yolov8s-cls.onnx";  // Update with actual model path
   string imagePath = "pizza.jpg";        // Update with actual image path

   var yolo = new Yolo(new YoloOptions()
   {
      ModelType = ModelType.Classification,
      OnnxModel = modelPath,
      Cuda = false
   });

   // Load image
   using SKImage image = SKImage.FromEncodedData(imagePath);

   // Run classification
   var results = yolo.RunClassification(image);

   // Display results
   foreach (var result in results)
   {
      Console.WriteLine($"Detected: {result.Label} with confidence {result.Confidence:P}");
   }

   Console.WriteLine("Press Enter to exit the application");
   Console.ReadLine();
}

At this point the code compiled and ran.

Pretty confident this is a picture of a pizza.

Pose

My Copilot prompt

Replace path/to/yolo_model.onnx and path/to/image.jpg with the actual paths to your model file and input image.

This example assumes that YoloDotNet V2 supports the loaded YOLO model. Verify compatibility with the YOLO ObjectDetection variant.

Copilot had “assumed” I meant Ultralytics Yolo V2 and the code didn’t compile. So, I tried again without V2.

At this point I gave up.
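For reference, based on the RunObjectDetection and RunClassification patterns above, the pose-estimation call would probably look like the sketch below. This is untested — the RunPoseEstimation method name, the YoloDotNet namespaces, and the model and image file names are all assumptions:

```csharp
using SkiaSharp;
using YoloDotNet;
using YoloDotNet.Enums;

// Untested sketch: mirrors the classification sample above, but with the
// pose-estimation model type. Names below are assumptions, not verified.
var yolo = new Yolo(new YoloOptions()
{
   ModelType = ModelType.PoseEstimation,
   OnnxModel = "yolov8s-pose.onnx",
   Cuda = false
});

using SKImage image = SKImage.FromEncodedData("people.jpg");

var results = yolo.RunPoseEstimation(image);

foreach (var result in results)
{
   Console.WriteLine($"Detected: {result.Label} with confidence {result.Confidence:P}");
}
```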

Summary

Using Copilot prompts to generate code which uses Ultralytics YoloV8 and YoloDotNet didn’t go well. In addition, Visual Studio IntelliSense often struggled to offer useful options.

Overall, the experience wasn’t great.

Building Cloud AI with AI – YoloSharp Object Detection Azure HTTP Trigger Function

Introduction

Using Copilot to generate the code to retrieve an image from a security camera, and small applications for object detection and object classification with YoloSharp, also went surprisingly well. An Azure HTTP Trigger function is a bit more complex, so I decided to do one next. I started with the Visual Studio 2022 generated code.

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

namespace YoloObjectDetectionFunction
{
    public class Function1
    {
        private readonly ILogger<Function1> _logger;

        public Function1(ILogger<Function1> logger)
        {
            _logger = logger;
        }

        [Function("Function1")]
        public IActionResult Run([HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req)
        {
            _logger.LogInformation("C# HTTP trigger function processed a request.");
            return new OkObjectResult("Welcome to Azure Functions!");
        }
    }
}

I then generated the first version of the code with Copilot desktop

The code looked “okay” but had lots of extra “using”s, used the FunctionName attribute rather than Function, etc., so it didn’t compile.

I think the “var yolo = new Yolo(modelPath)” comes from YoloDotNet by Niklas Swärd and “var detections = yolo.Predict(image)” is from YoloV8.Net by Samuel Stainback

using System.IO;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;
using SixLabors.ImageSharp.Processing;
using YoloSharp;
using YoloSharp.DataStructures;

public static class YoloObjectDetectionFunction
{
    private static readonly string modelPath = "path/to/your/yolo-model.onnx";

    [FunctionName("YoloObjectDetection")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequestMessage req,
        ILogger log)
    {
        log.LogInformation("Yolo Object Detection function processed a request.");

        // Read the image from the request
        byte[] imageBytes = await req.Content.ReadAsByteArrayAsync();
        using (Image<Rgba32> image = Image.Load<Rgba32>(imageBytes))
        {
            // Initialize YoloSharp
            using (var yolo = new Yolo(modelPath))
            {
                // Detect objects in the image
                var detections = yolo.Predict(image);

                // Process the detections
                foreach (var detection in detections)
                {
                    // Draw bounding boxes on the image
                    image.Mutate(ctx => ctx.DrawPolygon(Rgba32.Red, 2, detection.BoundingBox.ToPolygon()));
                }

                // Save the modified image to a memory stream
                using (var ms = new MemoryStream())
                {
                    image.SaveAsPng(ms);
                    ms.Seek(0, SeekOrigin.Begin);

                    // Return the modified image as a response
                    var result = new HttpResponseMessage(HttpStatusCode.OK)
                    {
                        Content = new StreamContent(ms)
                    };
                    result.Content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("image/png");
                    return new FileStreamResult(ms, "image/png");
                }
            }
        }
    }
}

I manually fixed all the issues in the code with a lot of assistance from Visual Studio 2022 IntelliSense.

using System.Net;

using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;
using SixLabors.ImageSharp.Processing;
using SixLabors.ImageSharp.Drawing.Processing;

//using YoloSharp;
//using YoloSharp.DataStructures;
using Compunet.YoloSharp;


public static class YoloObjectDetectionFunction
{
   //private static readonly string modelPath = "path/to/your/yolo-model.onnx";
   private static readonly string modelPath = "yolov8s.onnx";

   //[FunctionName("YoloObjectDetection")]
   [Function("YoloObjectDetection")]
   public static async Task<IActionResult> Run(
       [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequestMessage req,
       ILogger log)
   {
      log.LogInformation("Yolo Object Detection function processed a request.");

      // Read the image from the request
      byte[] imageBytes = await req.Content.ReadAsByteArrayAsync();
      using (Image<Rgba32> image = Image.Load<Rgba32>(imageBytes))
      {
         // Initialize YoloSharp
         //using (var yolo = new Yolo(modelPath))
         using (var yolo = new YoloPredictor(modelPath))
         {
            // Detect objects in the image
            //var detections = yolo.Predict(image);
            var detections = yolo.Detect(image);

            // Process the detections
            foreach (var detection in detections)
            {
               // Draw bounding boxes on the image
               //image.Mutate(ctx => ctx.DrawPolygon(Rgba32.Red, 2, detection.BoundingBox.ToPolygon()));

               var rectangle = new PointF[] {new Point(detection.Bounds.Bottom, detection.Bounds.Left), new Point(detection.Bounds.Bottom, detection.Bounds.Right), new Point(detection.Bounds.Right, detection.Bounds.Top), new Point(detection.Bounds.Left, detection.Bounds.Top)};

               image.Mutate(ctx => ctx.DrawPolygon(Rgba32.ParseHex("FF0000"), 2, rectangle));
            }

            // Save the modified image to a memory stream
            using (var ms = new MemoryStream())
            {
               image.SaveAsPng(ms);
               ms.Seek(0, SeekOrigin.Begin);

               // Return the modified image as a response
               var result = new HttpResponseMessage(HttpStatusCode.OK)
               {
                  Content = new StreamContent(ms)
               };
               result.Content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("image/png");
               return new FileStreamResult(ms, "image/png");
            }
         }
      }
   }
}

When I posted an image to the function with Telerik Fiddler Classic it threw an exception, because the ILogger injected into the Run method was null.

I then asked GitHub Copilot to “Modify code so the ILogger is parameter of constructor”.

using System.Net;

using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;
using SixLabors.ImageSharp.Processing;
using SixLabors.ImageSharp.Drawing.Processing;

//using YoloSharp;
//using YoloSharp.DataStructures;
using Compunet.YoloSharp;


//public static class YoloObjectDetectionFunction
public class YoloObjectDetectionFunction
{
   private static ILogger _log; // static readonly can't be assigned from an instance constructor
   //private static readonly string modelPath = "path/to/your/yolo-model.onnx";
   private static readonly string modelPath = "yolov8s.onnx";

   public YoloObjectDetectionFunction(ILogger<YoloObjectDetectionFunction> log)
   {
      _log = log;
   }

   //[FunctionName("YoloObjectDetection")]
   [Function("YoloObjectDetection")]
   //public static async Task<IActionResult> Run( [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequestMessage req, ILogger log)
   public static async Task<IActionResult> Run([HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequestMessage req)
   {
      _log.LogInformation("Yolo Object Detection function processed a request.");

      // Read the image from the request
      byte[] imageBytes = await req.Content.ReadAsByteArrayAsync();
      using (Image<Rgba32> image = Image.Load<Rgba32>(imageBytes))
      {
         // Initialize YoloSharp
         //using (var yolo = new Yolo(modelPath))
         using (var yolo = new YoloPredictor(modelPath))
         {
            // Detect objects in the image
            //var detections = yolo.Predict(image);
            var detections = yolo.Detect(image);

            // Process the detections
            foreach (var detection in detections)
            {
               // Draw bounding boxes on the image
               //image.Mutate(ctx => ctx.DrawPolygon(Rgba32.Red, 2, detection.BoundingBox.ToPolygon()));

               var rectangle = new PointF[] {new Point(detection.Bounds.Bottom, detection.Bounds.Left), new Point(detection.Bounds.Bottom, detection.Bounds.Right), new Point(detection.Bounds.Right, detection.Bounds.Top), new Point(detection.Bounds.Left, detection.Bounds.Top)};

               image.Mutate(ctx => ctx.DrawPolygon(Rgba32.ParseHex("FF0000"), 2, rectangle));
            }

            // Save the modified image to a memory stream
            using (var ms = new MemoryStream())
            {
               image.SaveAsPng(ms);
               ms.Seek(0, SeekOrigin.Begin);

               // Return the modified image as a response
               var result = new HttpResponseMessage(HttpStatusCode.OK)
               {
                  Content = new StreamContent(ms)
               };
               result.Content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("image/png");
               return new FileStreamResult(ms, "image/png");
            }
         }
      }
   }
}

When I posted an image to the function it threw an exception because the content of the HttpRequestMessage was null.

I then asked GitHub Copilot to “Modify the code so that the image is read from the form”.

// Read the image from the form
var form = await req.ReadFormAsync();
var file = form.Files["image"];
if (file == null || file.Length == 0)
{
   return new BadRequestObjectResult("Image file is missing or empty.");
}

When I posted an image to the function it returned a 400 Bad Request Error.

After inspecting the request I realized that the form field name was wrong; the generated code was looking for “image”.

Content-Disposition: form-data; name="image"; filename="sports.jpg"
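For future reference, the matching client-side request can be built with HttpClient. A minimal sketch — the endpoint URL is a placeholder, and the key point is that the form field name passed to Add must be "image" to match the form.Files["image"] lookup in the function:

```csharp
using System.Net.Http;
using System.Net.Http.Headers;

// Build a multipart/form-data request whose field name matches the
// form.Files["image"] lookup in the Azure Function.
using var client = new HttpClient();
using var content = new MultipartFormDataContent();

var imageContent = new ByteArrayContent(File.ReadAllBytes("sports.jpg"));
imageContent.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");

// The second argument is the form field name the function expects.
content.Add(imageContent, "image", "sports.jpg");

// Placeholder endpoint; replace with the deployed function URL.
// var response = await client.PostAsync("http://localhost:7071/api/YoloObjectDetection", content);
```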

Then, when I posted an image to the function it returned a 500 error.

The FileStreamResult was failing, so I modified the code to return a FileContentResult.

using (var ms = new MemoryStream())
{
   image.SaveAsJpeg(ms);

   return new FileContentResult(ms.ToArray(), "image/jpeg");
}

Then, when I posted an image to the function it succeeded

But, the bounding boxes around the detected objects were wrong.

I then manually fixed up the polygon code so the lines for each bounding box were drawn in the correct order.

// Process the detections
foreach (var detection in detections)
{
   var rectangle = new PointF[] {
      new Point(detection.Bounds.Left, detection.Bounds.Bottom),
      new Point(detection.Bounds.Right, detection.Bounds.Bottom),
      new Point(detection.Bounds.Right, detection.Bounds.Top),
      new Point(detection.Bounds.Left, detection.Bounds.Top)
 };

Then, when I posted an image to the function it succeeded

The bounding boxes around the detected objects were correct.

I then “refactored” the code, removing all the unused “using”s, removed any commented out code, changed ILogger to be initialised using a Primary Constructor etc.

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;
using SixLabors.ImageSharp.Processing;
using SixLabors.ImageSharp.Drawing.Processing;

using Compunet.YoloSharp;

public class YoloObjectDetectionFunction(ILogger<YoloObjectDetectionFunction> log)
{
   private readonly ILogger<YoloObjectDetectionFunction> _log = log;
   private readonly string modelPath = "yolov8s.onnx";

   [Function("YoloObjectDetection")]
   public async Task<IActionResult> Run([HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req)
   {
      _log.LogInformation("Yolo Object Detection function processed a request.");

      // Read the image from the form
      var form = await req.ReadFormAsync();
      var file = form.Files["image"];
      if (file == null || file.Length == 0)
      {
         return new BadRequestObjectResult("Image file is missing or empty.");
      }

      using (var stream = file.OpenReadStream())
      using (Image<Rgba32> image = Image.Load<Rgba32>(stream))
      {
         // Initialize YoloSharp
         using (var yolo = new YoloPredictor(modelPath))
         {
            // Detect objects in the image
            var detections = yolo.Detect(image);

            // Process the detections
            foreach (var detection in detections)
            {
               var rectangle = new PointF[] {
                  new Point(detection.Bounds.Left, detection.Bounds.Bottom),
                  new Point(detection.Bounds.Right, detection.Bounds.Bottom),
                  new Point(detection.Bounds.Right, detection.Bounds.Top),
                  new Point(detection.Bounds.Left, detection.Bounds.Top)
               };

               image.Mutate(ctx => ctx.DrawPolygon(Rgba32.ParseHex("FF0000"), 2, rectangle));
            }

            // Save the modified image to a memory stream
            using (var ms = new MemoryStream())
            {
               image.SaveAsJpeg(ms);

               return new FileContentResult(ms.ToArray(), "image/jpeg");
            }
         }
      }
   }
}
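A quick way to exercise the function is a small console client that posts an image as multipart form data with the field name "image", which is the name the function reads from `form.Files`. This is a sketch: the endpoint URL, function key, and file name below are placeholders, not values from this post.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class FunctionClient
{
   static async Task Main()
   {
      // Placeholder endpoint - substitute your function URL and code query parameter
      const string functionUrl = "http://localhost:7071/api/YoloObjectDetection";

      using var client = new HttpClient();
      using var form = new MultipartFormDataContent();
      await using var imageStream = File.OpenRead("sports.jpg");

      // The field name must be "image" because the function reads form.Files["image"]
      form.Add(new StreamContent(imageStream), "image", "sports.jpg");

      using var response = await client.PostAsync(functionUrl, form);
      response.EnsureSuccessStatusCode();

      // The function returns the annotated JPEG as the response body
      await File.WriteAllBytesAsync("output.jpg", await response.Content.ReadAsByteArrayAsync());
      Console.WriteLine("Saved annotated image to output.jpg");
   }
}
```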

Summary

The initial code generated by Copilot was badly broken, but with the assistance of Visual Studio 2022 IntelliSense it was fixed fairly quickly. The ILogger not being initialised and my using the “wrong” upload file name were easy to debug, but the FileContentResult exception was a bit more difficult.

It took me quite a bit longer to write the function with Copilot desktop/GitHub Copilot than it would have taken me normally. But I think a lot of this was due to having to take screenshots, writing this blog post as I went, and having already written several Azure HTTP Trigger functions for processing uploaded images.

The Copilot generated code in this post is not suitable for production.

Building Edge AI with AI – YoloSharp Client

Introduction

Using Copilot to generate the code to retrieve an image from a security camera went surprisingly well, so the next step was something a bit more complex. For this post I have used Copilot prompts to generate code which uses Ultralytics YoloV8 and Compunet YoloSharp for object detection, object classification, pose estimation, and instance segmentation.

using System;
using System.Drawing;
using YOLOSharp;

class Program
{
    static void Main(string[] args)
    {
        string modelPath = "path/to/yolov8.onnx"; // Replace with the path to your YOLOv8 model
        string imagePath = "path/to/image.jpg"; // Replace with the path to your image

        // Load the YOLOv8 model
        var yolo = new YOLO(modelPath);

        // Load the image from disk
        using (var image = Image.FromFile(imagePath))
        {
            // Run the YOLOv8 model on the image
            var results = yolo.Predict(image);

            // Display the results
            foreach (var result in results)
            {
                Console.WriteLine($"Label: {result.Label}, Confidence: {result.Confidence}, BoundingBox: {result.BoundingBox}");
            }
        }
    }
}
In this example:

Replace path/to/yolov8.onnx with the actual path to your YOLOv8 model file.

Replace path/to/image.jpg with the actual path to your image file.

This application loads the YOLOv8 model using YOLOSharp, loads an image from disk, runs the model on the image, and displays the results, including the label, confidence, and bounding box for each detected object.

Happy coding! 🚀

Copilot had “assumed” I meant object detection (reasonable) with YoloSharp and the code didn’t compile.

Object Detection

After some minor fixes and shifting the configuration to an appsettings file the application compiled.

using Compunet.YoloSharp;
using Microsoft.Extensions.Configuration;
using SixLabors.ImageSharp;

class Program
{
   static void Main(string[] args)
   {
      Model.ApplicationSettings applicationSettings;

      Console.WriteLine($"{DateTime.UtcNow:yy-MM-dd HH:mm:ss} 0.3.CameraHTTPClient starting");
#if RELEASE
       Console.WriteLine("RELEASE");
#else
      Console.WriteLine("DEBUG");
#endif
      // load the app settings into configuration
      var configuration = new ConfigurationBuilder()
           .AddJsonFile("appsettings.json", false, true)
      .Build();

      applicationSettings = configuration.GetSection("ApplicationSettings").Get<Model.ApplicationSettings>();

      // Load the YOLOv8 model
      var yolo = new YoloPredictor(applicationSettings.ModelPath);

      // Load the image from disk
      using (var image = Image.Load(applicationSettings.ImagePath))
      {
         // Run the YOLOv8 model on the image
         var results = yolo.Detect(image);

         // Display the results
         foreach (var result in results)
         {
            Console.WriteLine($"Label: {result.Name}, Confidence: {result.Confidence}, BoundingBox: {result.Bounds}");
         }
      }
   }
}
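The code binds an `ApplicationSettings` section from `appsettings.json` (copied to the output directory), so the settings file looks something like this; the paths are examples, not the ones I used:

```json
{
  "ApplicationSettings": {
    "ModelPath": "yolov8s.onnx",
    "ImagePath": "sports.jpg"
  }
}
```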

I tested the implementation with the sample “sports” image from the YoloSharp GitHub repository.

The console application output looked reasonable

I manually compared the output of the console application with the Ultralytics Hub model preview tab and the results looked close enough.

Pose

My Copilot prompt

please write a C# console application that uses Compunet YoloSharp to run a yolov8 pose model on an image loaded from disk
using Compunet.YoloSharp;
using Microsoft.Extensions.Configuration;
using SixLabors.ImageSharp;

class Program
{
   static void Main(string[] args)
   {
      Model.ApplicationSettings applicationSettings;

      Console.WriteLine($"{DateTime.UtcNow:yy-MM-dd HH:mm:ss} 0.3.CameraHTTPClient starting");
#if RELEASE
       Console.WriteLine("RELEASE");
#else
      Console.WriteLine("DEBUG");
#endif
      // load the app settings into configuration
      var configuration = new ConfigurationBuilder()
           .AddJsonFile("appsettings.json", false, true)
      .Build();

      applicationSettings = configuration.GetSection("ApplicationSettings").Get<Model.ApplicationSettings>();

      // Load the YOLOv8 pose model
      var yolo = new YoloPredictor(applicationSettings.ModelPath);

      // Load the image from disk
      using (var image = Image.Load(applicationSettings.ImagePath))
      {
         // Run the YOLOv8 pose model on the image
         var results = yolo.Pose(image);

         // Display the results
         foreach (var result in results)
         {
            Console.WriteLine($"Label: {result.Name.Name}, Confidence: {result.Confidence}, BoundingBox: {result.Bounds}");
            Console.WriteLine("Keypoints:");
            foreach (var keypoint in result)
            {
               Console.WriteLine($"  - {keypoint.Point}");
            }
         }
      }
   }
}
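To sanity-check the keypoints visually they can be drawn back onto the image with SixLabors.ImageSharp.Drawing. This is a sketch using the same YoloSharp result types as the console application above; the model and image file names are placeholders.

```csharp
using Compunet.YoloSharp;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Drawing;
using SixLabors.ImageSharp.Drawing.Processing;
using SixLabors.ImageSharp.Processing;

class PoseOverlay
{
   static void Main()
   {
      // Placeholder file names - substitute your pose model and image
      using var yolo = new YoloPredictor("yolov8s-pose.onnx");
      using var image = Image.Load("sports.jpg");

      var results = yolo.Pose(image);

      foreach (var result in results)
      {
         foreach (var keypoint in result)
         {
            // Draw a small filled circle at each keypoint
            image.Mutate(ctx => ctx.Fill(Color.Red,
               new EllipsePolygon(keypoint.Point.X, keypoint.Point.Y, 3)));
         }
      }

      image.Save("pose.jpg");
   }
}
```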

After some minor fixes and shifting the configuration to an appsettings file the application compiled. I tested the implementation with the sample “sports” image from the YoloSharp GitHub repository.

The console application output looked reasonable

I manually compared the output of the console application with the Ultralytics Hub model preview tab and the results were reasonable

Classification

My Copilot prompt

please write a C# console application that uses Compunet YoloSharp to run a yolov8 classification model on an image loaded from disk
using Compunet.YoloSharp;
using Microsoft.Extensions.Configuration;
using SixLabors.ImageSharp;

class Program
{
   static void Main(string[] args)
   {
      Model.ApplicationSettings applicationSettings;

      Console.WriteLine($"{DateTime.UtcNow:yy-MM-dd HH:mm:ss} 0.3.CameraHTTPClient starting");
#if RELEASE
       Console.WriteLine("RELEASE");
#else
      Console.WriteLine("DEBUG");
#endif

      // load the app settings into configuration
      var configuration = new ConfigurationBuilder()
           .AddJsonFile("appsettings.json", false, true)
      .Build();

      applicationSettings = configuration.GetSection("ApplicationSettings").Get<Model.ApplicationSettings>();

      // Load the YOLOv8 classification model
      var yolo = new YoloPredictor(applicationSettings.ModelPath);

      // Load the image from disk
      using (var image = Image.Load(applicationSettings.ImagePath))
      {
         // Run the YOLOv8 classification model on the image
         var results = yolo.Classify(image);

         // Display the results
         foreach (var result in results)
         {
             Console.WriteLine($"Label: {result.Name.Name}, Confidence: {result.Confidence}");
         }
      }
   }
}

After some minor fixes and shifting the configuration to an appsettings file the application compiled. I tested the implementation with the sample “toaster” image from the YoloSharp GitHub repository.

The console application output looked reasonable

I’m pretty confident the input image was a toaster.

Summary

The Copilot prompts to generate code which uses Ultralytics YoloV8 and Compunet YoloSharp may have produced better code with some “prompt engineering”. Using Visual Studio IntelliSense the generated code was easy to fix.

The Copilot generated code in this post is not suitable for production.

NickSwardh NuGet Nvidia Jetson Orin Nano™ GPU CUDA Inferencing

My Seeedstudio reComputer J3011 has two processors: an ARM64 CPU and an Nvidia Jetson Orin 8G coprocessor. YoloDotNet by NickSwardh V2 (which uses SkiaSharp) was significantly faster when run on the ARM64 CPU, so I wanted to try inferencing with the Nvidia Jetson Orin 8G coprocessor.

Performance of YoloDotNet by NickSwardh V2 running on the ARM64 CPU

Performance of YoloDotNet by NickSwardh V2 running on the Nvidia Jetson Orin 8G with Compute Unified Device Architecture (CUDA) enabled.

Enabling CUDA reduced the total image scaling, pre-processing, inferencing, and post-processing time from 115mSec to 36mSec, which is a significant improvement.
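Under the hood, enabling CUDA amounts to appending the CUDA Execution Provider to the ONNX Runtime session options. With Microsoft.ML.OnnxRuntime directly it looks something like this sketch; the model file name is a placeholder.

```csharp
using System;
using Microsoft.ML.OnnxRuntime;

class CudaSession
{
   static void Main()
   {
      using var options = new SessionOptions();

      // Append the CUDA Execution Provider for GPU device 0; ONNX Runtime
      // falls back to the CPU provider for any operators CUDA doesn't support
      options.AppendExecutionProvider_CUDA(0);

      // Placeholder model path - substitute your ONNX model
      using var session = new InferenceSession("yolov8s.onnx", options);

      Console.WriteLine($"Inputs: {string.Join(", ", session.InputMetadata.Keys)}");
   }
}
```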

YoloV8-NuGet Performance ARM64 CPU

To see how the dme-compunet, updated YoloDotNet and sstainba NuGets performed on an ARM64 CPU I built a test rig for the different NuGets using standard images and ONNX Models.

I started with the dme-compunet YoloV8 NuGet which found all the tennis balls and the results were consistent with earlier tests.

The YoloDotNet by NickSwardh NuGet update had some “breaking changes” so I built “old” and “updated” test harnesses.

The YoloDotNet by NickSwardh V1 and V2 results were slightly different. The V2 NuGet uses SkiaSharp which appears to significantly improve the performance.

Even though the YoloV8 by sstainba NuGet hadn’t been updated I ran the test harness just in case.

The dme-compunet YoloV8 and NickSwardh YoloDotNet V1 versions produced the same results, but the NickSwardh YoloDotNet V2 results were slightly different.

  • dme-compunet 291 mSec
  • NickSwardh V1 480 mSec
  • NickSwardh V2 115 mSec
  • sstainba 422 mSec

As in the YoloV8-NuGet Performance X64 CPU post, the NickSwardh V2 implementation which uses SkiaSharp was significantly faster, so it looks like SixLabors.ImageSharp is the issue.

Supporting Compute Unified Device Architecture (CUDA) or TensorRT inferencing with NickSwardh V2 (because of SkiaSharp) will need some major modifications to the code, so it might be better to build my own YoloV8 NuGet.

YoloV8-NuGet Performance X64 CPU

When checking the dme-compunet, YoloDotNet, and sstainba NuGets I noticed the YoloDotNet readme.md detailed some performance enhancements…

What’s new in YoloDotNet v2.0?

YoloDotNet 2.0 is a Speed Demon release where the main focus has been on supercharging performance to bring you the fastest and most efficient version yet. With major code optimizations, a switch to SkiaSharp for lightning-fast image processing, and added support for Yolov10 as a little extra 😉 this release is set to redefine your YoloDotNet experience:

Changing the implementation to use SkiaSharp caught my attention because in previous testing manipulating images with the SixLabors.ImageSharp library took longer than expected.
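A simple way to see where the pre-processing time goes is to put a Stopwatch around the image scaling. This sketch times resizing an image to the 640×640 model input with SixLabors.ImageSharp; the file name is a placeholder.

```csharp
using System;
using System.Diagnostics;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;

class ResizeTiming
{
   static void Main()
   {
      // Placeholder file name - substitute a large test image
      using var image = Image.Load("TennisBallsLandscape3072x4080.jpg");

      var stopwatch = Stopwatch.StartNew();

      // Scale down to the 640x640 model input size
      image.Mutate(ctx => ctx.Resize(640, 640));

      stopwatch.Stop();
      Console.WriteLine($"Resize took {stopwatch.ElapsedMilliseconds} mSec");
   }
}
```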

I built a test rig for comparing the performance of the different NuGets using standard images and ONNX Models.

I started with the dme-compunet YoloV8 NuGet which found all the tennis balls and the results were consistent with earlier tests.

dme-compunet test harness image bounding boxes

The YoloDotNet by NickSwardh NuGet update had some “breaking changes” so I built “old” and “updated” test harnesses. The V1 version found all the tennis balls and the results were consistent with earlier tests.

NickSwardh V1 test harness image bounding boxes

The YoloDotNet by NickSwardh NuGet update had some “breaking changes” so there were some code changes but the V1 and V2 results were slightly different.

NickSwardh V2 test harness image bounding boxes

Even though the YoloV8 by sstainba NuGet hadn’t been updated I ran the test harness just in case and the results were consistent with previous tests.

sstainba test harness image bounding boxes

The dme-compunet YoloV8 and NickSwardh YoloDotNet V1 versions produced the same results, but the NickSwardh YoloDotNet V2 results were slightly different. The YoloV8 by sstainba results were unchanged.

  • dme-compunet 71 mSec
  • NickSwardh V1 76 mSec
  • NickSwardh V2 33 mSec
  • sstainba 82 mSec

The NickSwardh V2 implementation was significantly faster, but I need to investigate the slight difference in the bounding boxes. It looks like SixLabors.ImageSharp might be the issue.

YoloV8 ONNX – Nvidia Jetson Orin Nano™ Execution Providers

The Seeedstudio reComputer J3011 has two processors: an ARM64 CPU and an Nvidia Jetson Orin 8G, which can be used for inferencing with the Open Neural Network Exchange (ONNX) Runtime.

Story of Fail

Inferencing worked first time on the ARM64 CPU because the required runtime is included in the Microsoft.ML.OnnxRuntime NuGet.

ARM64 Linux ONNX runtime
Microsoft.ML.OnnxRuntime NuGet ARM64 Linux runtime

Inferencing failed on the Nvidia Jetson Orin 8G because the CUDA Execution Provider and TensorRT Execution Provider for the ONNXRuntime were not included in the Microsoft.ML.OnnxRuntime.Gpu.Linux NuGet.

Missing ARM64 Linux GPU runtime

There were Linux x64 and Windows x64 versions of the ONNXRuntime library included in the Microsoft.ML.OnnxRuntime.Gpu NuGet.

Microsoft.ML.OnnxRuntime.Gpu NuGet x64 Linux runtime

Desperately Seeking libonnxruntime.so

The Nvidia ONNX Runtime site had pip wheel files for the different versions of Python and the Open Neural Network Exchange (ONNX) Runtime.

The onnxruntime_gpu-1.18.0-cp312-cp312-linux_aarch64.whl matched the version of the ONNXRuntime I needed and the version of Python on the device.

When the pip wheel file was renamed onnxruntime_gpu-1.18.0-cp312-cp312-linux_aarch64.zip it could be opened, but there wasn’t a libonnxruntime.so.

Onnxruntime_gpu-1.18.0-cp312-cp312-linux_aarch64 file listing

Building the TensorRT & CUDA Execution Providers

The ONNXRuntime build has to be done on the Nvidia Jetson Orin, and even after installing all the necessary prerequisites the first attempt failed.

bryn@ubuntu:~/onnxruntime/onnxruntime$ ./build.sh --config Release --update --build --build_wheel \
--use_tensorrt --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu \
--tensorrt_home /usr/lib/aarch64-linux-gnu

When in high power mode more cores are used, but this consumes more resources when building the ONNXRuntime. To limit resource utilisation --parallel 2 was added to the command line because the compile process was having “out of memory” failures.

bryn@ubuntu:~/onnxruntime/onnxruntime$ ./build.sh --config Release --update --build --parallel 2 --build_wheel \
--use_tensorrt --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu \
--tensorrt_home /usr/lib/aarch64-linux-gnu

There were some compiler warnings but they appear to be benign.

The first attempt at running the application failed because libonnxruntime.so was missing, so --build_shared_lib was added to the command line.

2024-06-10 18:21:58,480 build [INFO] - Build complete
bryn@ubuntu:~/onnxruntime/onnxruntime$ ./build.sh --config Release --update --build --parallel 2 --build_wheel --use_tensorrt --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu --tensorrt_home /usr/lib/aarch64-linux-gnu --build_shared_lib

When the build completed the files were copied to the runtime folder of the program.

The application could then be configured to use the TensorRT Execution Provider.
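With the locally built libonnxruntime.so in place, pointing the session at the TensorRT Execution Provider is a small change to the session options. This is a sketch: the model file name is a placeholder, and the providers are tried in the order they are appended.

```csharp
using System;
using Microsoft.ML.OnnxRuntime;

class TensorRtSession
{
   static void Main()
   {
      using var options = new SessionOptions();

      // Try TensorRT first, then CUDA, with the CPU provider as the final fallback
      options.AppendExecutionProvider_Tensorrt(0);
      options.AppendExecutionProvider_CUDA(0);

      // Placeholder model path - substitute your ONNX model
      using var session = new InferenceSession("TennisBallsYoloV8s.onnx", options);

      Console.WriteLine("Session created with the TensorRT Execution Provider");
   }
}
```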

Getting CUDA and TensorRT working on the Nvidia Jetson Orin 8G took much longer than I expected, with many dead ends and device factory resets before the process was repeatable.

YoloV8 ONNX – Nvidia Jetson Orin Nano™ CPU & GPU TensorRT Inferencing

The Seeedstudio reComputer J3011 has two processors: an ARM64 CPU and an Nvidia Jetson Orin 8G. To speed up inferencing I built an Open Neural Network Exchange (ONNX) TensorRT Execution Provider. After updating the code to add a “warm-up” and tracking of average pre-processing, inferencing & post-processing durations I did a series of CPU & GPU performance tests.
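The warm-up and averaging can be sketched like this; the `predict` delegate stands in for whichever library's pre-process + inference + post-process path is being tested, and the iteration counts are illustrative.

```csharp
using System;
using System.Diagnostics;

class InferenceBenchmark
{
   static void Main()
   {
      const int warmUpIterations = 5;  // first runs include model load/engine build cost
      const int timedIterations = 20;

      // Stand-in for the pre-processing + inferencing + post-processing under test
      Action predict = () => { /* run the model here */ };

      // Warm-up runs are excluded from the timing
      for (int i = 0; i < warmUpIterations; i++)
      {
         predict();
      }

      var stopwatch = Stopwatch.StartNew();
      for (int i = 0; i < timedIterations; i++)
      {
         predict();
      }
      stopwatch.Stop();

      Console.WriteLine($"Average: {stopwatch.ElapsedMilliseconds / (double)timedIterations:F1} mSec");
   }
}
```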

The testing consisted of permutations of three models TennisBallsYoloV8s20240618640×640.onnx, TennisBallsYoloV8s2024062410241024.onnx & TennisBallsYoloV8x20240614640×640 (limited testing as it was slow) and three images TennisBallsLandscape640x640.jpg, TennisBallsLandscape1024x1024.jpg & TennisBallsLandscape3072x4080.jpg.

Executive Summary

As expected, inferencing with a TensorRT 640×640 model and a 640×640 image was fastest, 9mSec pre-processing, 21mSec inferencing, then 4mSec post-processing.

If the image had to be scaled with SixLabors.ImageSharp this significantly increased the preprocessing (and overall) time.

CPU Inferencing

GPU TensorRT Small model Inferencing

GPU TensorRT Large model Inferencing

Nvidia Jetson Orin Nano™ JetPack 6

The Seeedstudio reComputer J3011 has two processors: an ARM64 CPU and an Nvidia Jetson Orin 8G coprocessor. Speeding up ML.NET running on the Nvidia Jetson Orin 8G required compatible versions of ML.NET, Open Neural Network Exchange (ONNX) Runtime and NVIDIA JetPack.

Before installing NVIDIA JetPack 6 the Seeedstudio reComputer J3011 Edge AI Device has to be put into recovery mode.

Seeedstudio reComputer J3011 Edge AI Device with jumper for recovery mode

When started in recovery mode the Seeedstudio J3011 was in the list of Universal Serial Bus (USB) devices returned by lsusb.

Upgrading to JetPack 5.1.1, so the device could be upgraded using the Windows Subsystem for Linux terminal, failed. The NVIDIA SDK Manager downloads and installs all the required components and dependencies.

Installing NVIDIA JetPack 6 from the Windows Subsystem for Linux failed because the version of Ubuntu installed (Ubuntu 24.04 LTS) was not supported by the NVIDIA SDK Manager.

Installing NVIDIA JetPack 6 from a desktop PC running Ubuntu 24.04 LTS failed (the same issue as above) because the NVIDIA SDK Manager did not support that version of Ubuntu. The desktop PC was then “re-paved” with Ubuntu 22.04 LTS and the NVIDIA SDK Manager worked.

An NVIDIA Developer program login is required to launch the NVIDIA SDK Manager

Selecting the right target hardware is important if it is not “auto detected”.

The Open Neural Network Exchange (ONNX) Runtime supports the Compute Unified Device Architecture (CUDA), which has to be included in the installation package.

Downloading NVIDIA Jetpack 6 and all the selected components of the install can be quite slow

Installation of NVIDIA Jetpack 6 and selected components can take a while.

Even though JetPack 6 is now available for Seeed’s Jetson Orin devices, this process is still applicable for an upgrade or “factory reset”.