Random wanderings through Microsoft Azure esp. PaaS plumbing, the IoT bits, AI on Micro controllers, AI on Edge Devices, .NET nanoFramework, .NET Core on *nix and ML.NET+ONNX
The initial comparison running on my development box (I will benchmark on my Seeedstudio EdgeBox RPi 200 later) was roughly what I was expecting, though the SkiaSharp 2560×1440 mean duration was a bit odd. I think the difference in the amount of memory allocated is because SkiaSharp's memory is allocated by native code. Both benchmarks need some refactoring to improve repeatability on my different platforms.
These benchmarks should be treated as indicative, not authoritative.
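When I get to the refactoring, BenchmarkDotNet's MemoryDiagnoser would also make the managed versus native allocation split visible, since it only counts managed allocations. A skeleton sketch only; the benchmark body is a placeholder, not my actual harness:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Skeleton only: the benchmark body is a placeholder for each NuGet's
// load + detect calls. MemoryDiagnoser reports managed allocations, so
// SkiaSharp's native allocations will not appear in the Allocated column.
[MemoryDiagnoser]
public class ObjectDetectionBenchmarks
{
    [Benchmark]
    public void LoadAndDetect()
    {
        // load the test image and run object detection with the NuGet under test
    }
}

public class Program
{
    public static void Main(string[] args) => BenchmarkRunner.Run<ObjectDetectionBenchmarks>();
}
```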
Several of my projects use the NickSwardh/YoloDotNet NuGet, which supports NVIDIA CUDA but not TensorRT. The first step before “putting the NuGet on a diet” was to fix up my test application, because some of the method signatures had changed in the latest release.
// load the app settings into configuration
var configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", false, true)
    .Build();

_applicationSettings = configuration.GetSection("ApplicationSettings").Get<Model.ApplicationSettings>();

Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model load start : {_applicationSettings.ModelPath}");

//using (var predictor = new Yolo(_applicationSettings.ModelPath, false))
using (var yolo = new Yolo(new YoloOptions()
{
    OnnxModel = _applicationSettings.ModelPath,
    Cuda = false,
    PrimeGpu = false,
    ModelType = ModelType.ObjectDetection,
}))
{
    Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model load done");
    Console.WriteLine();

    //using (var image = await SixLabors.ImageSharp.Image.LoadAsync<Rgba32>(_applicationSettings.ImageInputPath))
    using (var image = SKImage.FromEncodedData(_applicationSettings.ImageInputPath))
    {
        Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model detect start");
        var predictions = yolo.RunObjectDetection(image);
        Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model detect done");
        Console.WriteLine();

        foreach (var prediction in predictions)
        {
            Console.WriteLine($" Class {prediction.Label.Name} {(prediction.Confidence * 100.0):f1}% X:{prediction.BoundingBox.Location.X} Y:{prediction.BoundingBox.Location.Y} Width:{prediction.BoundingBox.Width} Height:{prediction.BoundingBox.Height}");
        }
        Console.WriteLine();

        Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} Plot and save : {_applicationSettings.ImageOutputPath}");

        using (SKImage skImage = image.Draw(predictions))
        {
            //await image.SaveAsJpegAsync(_applicationSettings.ImageOutputPath);
            skImage.Save(_applicationSettings.ImageOutputPath, SKEncodedImageFormat.Jpeg);
        }
    }
}
The YoloDotNet code was looking for specific text in the model description, which wasn’t present in the description of my Ultralytics Hub trained models.
I downloaded the YoloDotNet source and included the core project in my solution so I could temporarily modify the GetModelVersion method in OnnxPropertiesExtension.cs.
/// <summary>
/// Get ONNX model version
/// </summary>
private static ModelVersion GetModelVersion(string modelDescription) => modelDescription.ToLower() switch
{
    var version when version.Contains("yolo") is false => ModelVersion.V8,
    var version when version.Contains("yoloV8") is false => ModelVersion.V8, // <========
    var version when version.StartsWith("ultralytics yolov8") => ModelVersion.V8,
    var version when version.StartsWith("ultralytics yolov9") => ModelVersion.V9,
    var version when version.StartsWith("ultralytics yolov10") => ModelVersion.V10,
    var version when version.StartsWith("ultralytics yolo11") => ModelVersion.V11, // Note the missing v in Yolo11
    var version when version.Contains("worldv2") => ModelVersion.V11,
    _ => throw new NotSupportedException("Onnx model not supported!")
};
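The modified switch can be exercised in isolation to see what the two added arms actually do. This standalone sketch (with a hypothetical ModelVersion enum mirroring YoloDotNet's) shows that, because the input is lowercased, Contains("yoloV8") can never be true, so the second arm matches every remaining description and everything resolves to V8, which is exactly the temporary "force V8" behaviour I wanted:

```csharp
using System;

// Hypothetical mirror of YoloDotNet's enum, for standalone experimentation.
public enum ModelVersion { V8, V9, V10, V11 }

public static class ModelVersionProbe
{
    // Standalone copy of the temporarily modified switch. Because the input
    // is lowercased, Contains("yoloV8") (capital V) can never return true,
    // so the second arm matches every description containing "yolo" and the
    // later Ultralytics arms are unreachable: everything resolves to V8.
    public static ModelVersion Get(string modelDescription) => modelDescription.ToLower() switch
    {
        var version when version.Contains("yolo") is false => ModelVersion.V8,
        var version when version.Contains("yoloV8") is false => ModelVersion.V8,
        var version when version.StartsWith("ultralytics yolov9") => ModelVersion.V9,
        _ => throw new NotSupportedException("Onnx model not supported!")
    };
}
```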
After getting the test application running in the Visual Studio 2022 debugger, it looked like the CustomMetadata Version information would be a better choice.
To check my assumption, I inspected some of the sample ONNX Model properties with Netron.
YoloV8s model – 8.1.1
YoloV10s model – 8.2.5.1
It looks like the CustomMetadata Version info increments but doesn’t nicely map to the Ultralytics Yolo version.
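For what it is worth, the CustomMetadata version strings seen in Netron parse cleanly with System.Version, so they can at least be compared; but as noted, the major number is 8 for both the YoloV8s and YoloV10s exports, so the Ultralytics package version alone cannot identify the model generation:

```csharp
using System;

// The CustomMetadata version strings seen in Netron parse with System.Version.
Version yoloV8s = Version.Parse("8.1.1");
Version yoloV10s = Version.Parse("8.2.5.1");

// Both exports report an Ultralytics package major version of 8, so the
// version alone cannot identify the model generation.
Console.WriteLine(yoloV8s.Major);       // 8
Console.WriteLine(yoloV10s.Major);      // 8
Console.WriteLine(yoloV10s > yoloV8s);  // True
```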
The YoloV8.dll in version 4.2 of the NuGet was initially 96.5KB. Most of my applications deployed to edge devices and Azure do not require plotting functionality, so I started by commenting it out (not a terribly subtle approach).
The YoloDotNet by NickSwardh V1 and V2 results were slightly different. The V2 NuGet uses SkiaSharp which appears to significantly improve the performance.
YoloDotNet 2.0 is a Speed Demon release where the main focus has been on supercharging performance to bring you the fastest and most efficient version yet. With major code optimizations, a switch to SkiaSharp for lightning-fast image processing, and added support for Yolov10 as a little extra 😉 this release is set to redefine your YoloDotNet experience:
Changing the implementation to use SkiaSharp caught my attention, because in previous testing manipulating images with the SixLabors.ImageSharp library took longer than expected.
I started with the dme-compunet YoloV8 NuGet which found all the tennis balls and the results were consistent with earlier tests.
dme-compunet test harness image bounding boxes
The YoloDotNet by NickSwardh NuGet update had some “breaking changes” so I built “old” and “updated” test harnesses. The V1 version found all the tennis balls and the results were consistent with earlier tests.
NickSwardh V1 test harness image bounding boxes
The “breaking changes” in the YoloDotNet by NickSwardh NuGet update required some code changes, and the V1 and V2 results were slightly different.
NickSwardh V2 test harness image bounding boxes
Even though the YoloV8 by sstainba NuGet hadn’t been updated I ran the test harness just in case and the results were consistent with previous tests.
The NickSwardh V2 implementation was significantly faster, but I need to investigate the slight difference in the bounding boxes. It looks like SixLabors.ImageSharp might be the issue.
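To quantify “slightly different”, a small Intersection-over-Union helper (my own sketch, not part of either NuGet) could score how closely the V1 and V2 boxes for the same tennis ball agree; a value near 1.0 means the detections are effectively identical:

```csharp
using System;

public static class BoundingBoxComparer
{
    // Intersection over Union of two axis-aligned boxes, each given as
    // X, Y (top-left corner), Width, Height. Returns 1.0 for identical
    // boxes and 0.0 for boxes that do not overlap.
    public static double IntersectionOverUnion(
        double x1, double y1, double w1, double h1,
        double x2, double y2, double w2, double h2)
    {
        double left = Math.Max(x1, x2);
        double top = Math.Max(y1, y2);
        double right = Math.Min(x1 + w1, x2 + w2);
        double bottom = Math.Min(y1 + h1, y2 + h2);

        double intersection = Math.Max(0, right - left) * Math.Max(0, bottom - top);
        double union = (w1 * h1) + (w2 * h2) - intersection;

        return union <= 0 ? 0.0 : intersection / union;
    }
}
```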
The same approach as the YoloV8.Detect.SecurityCamera.Stream sample is used because the image doesn’t have to be saved on the local filesystem.
Ultralytics Pose Model marked-up image
To check the results, I put a breakpoint in the timer just after the PoseAsync method is called and then used the Visual Studio 2022 Debugger QuickWatch functionality to inspect the contents of the PoseResult object.
The application uses the Microsoft.Extensions.Logging library to publish diagnostic information to the console while debugging the application.
Visual Studio 2022 QuickWatch displaying object detection results.
To check the results, I put a breakpoint in the timer just after the DetectAsync method is called and then used the Visual Studio 2022 Debugger QuickWatch functionality to inspect the contents of the DetectionResult object.
Visual Studio 2022 JSON Visualiser displaying object detection results.
Security Camera image for object detection photo bombed by Yarnold our Standard Apricot Poodle.
This application can also be deployed as a Linux systemd service so it will start automatically and run in the background. The same approach as the YoloV8.Detect.SecurityCamera.Stream sample is used because the image doesn’t have to be saved on the local filesystem.
The YoloV8.Detect.SecurityCamera.File sample downloads images from the security camera to the local file system, then calls DetectAsync with the local file path.
private static async void ImageUpdateTimerCallback(object state)
{
    //...
    try
    {
        Console.WriteLine($"{DateTime.UtcNow:yy-MM-dd HH:mm:ss:fff} YoloV8 Security Camera Image File processing start");

        using (Stream cameraStream = await _httpClient.GetStreamAsync(_applicationSettings.CameraUrl))
        using (Stream fileStream = System.IO.File.Create(_applicationSettings.ImageFilepath))
        {
            await cameraStream.CopyToAsync(fileStream);
        }

        DetectionResult result = await _predictor.DetectAsync(_applicationSettings.ImageFilepath);

        Console.WriteLine($"Speed: {result.Speed}");

        foreach (var prediction in result.Boxes)
        {
            Console.WriteLine($" Class {prediction.Class} {(prediction.Confidence * 100.0):f1}% X:{prediction.Bounds.X} Y:{prediction.Bounds.Y} Width:{prediction.Bounds.Width} Height:{prediction.Bounds.Height}");
        }

        Console.WriteLine($"{DateTime.UtcNow:yy-MM-dd HH:mm:ss:fff} YoloV8 Security Camera Image processing done");
    }
    catch (Exception ex)
    {
        Console.WriteLine($"{DateTime.UtcNow:yy-MM-dd HH:mm:ss} YoloV8 Security camera image download or YoloV8 prediction failed {ex.Message}");
    }
    //...
}
Console application using camera image saved on filesystem
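A stream-based variant could skip the file round-trip entirely, which is the approach the YoloV8.Detect.SecurityCamera.Stream sample takes. This is a sketch only: _httpClient, _predictor and _applicationSettings are assumed to be the same members as the file-based sample, and it relies on DetectAsync accepting a Stream via the ImageSelector implicit conversions.

```csharp
// Sketch only: buffer the camera response in memory rather than writing it
// to the local filesystem. _httpClient, _predictor and _applicationSettings
// are assumed to be the same members used by the file-based sample.
using (Stream cameraStream = await _httpClient.GetStreamAsync(_applicationSettings.CameraUrl))
using (MemoryStream imageBuffer = new MemoryStream())
{
    await cameraStream.CopyToAsync(imageBuffer);
    imageBuffer.Position = 0;

    // ImageSelector has an implicit conversion from Stream, so the buffer
    // can be passed straight to DetectAsync.
    DetectionResult result = await _predictor.DetectAsync(imageBuffer);

    Console.WriteLine($"Speed: {result.Speed}");
}
```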
The ImageSelector parameter of DetectAsync caught my attention as I hadn’t seen this approach used before. The developers who wrote the NuGet package are definitely smarter than me so I figured I might learn something useful digging deeper.
public static DetectionResult Detect(this YoloV8 predictor, ImageSelector selector)
{
    predictor.ValidateTask(YoloV8Task.Detect);

    return predictor.Run(selector, (outputs, image, timer) =>
    {
        var output = outputs[0].AsTensor<float>();

        var parser = new DetectionOutputParser(predictor.Metadata, predictor.Parameters);

        var boxes = parser.Parse(output, image);
        var speed = timer.Stop();

        return new DetectionResult
        {
            Boxes = boxes,
            Image = image,
            Speed = speed,
        };
    });
}

public TResult Run<TResult>(ImageSelector selector, PostprocessContext<TResult> postprocess) where TResult : YoloV8Result
{
    using var image = selector.Load(true);

    var originSize = image.Size;

    var timer = new SpeedTimer();

    timer.StartPreprocess();

    var input = Preprocess(image);

    var inputs = MapNamedOnnxValues([input]);

    timer.StartInference();

    using var outputs = Infer(inputs);

    var list = new List<NamedOnnxValue>(outputs);

    timer.StartPostprocess();

    return postprocess(list, originSize, timer);
}
It looks like most of the image loading magic of the ImageSelector class is implemented using the SixLabors library…
public class ImageSelector<TPixel> where TPixel : unmanaged, IPixel<TPixel>
{
    private readonly Func<Image<TPixel>> _factory;

    public ImageSelector(Image image)
    {
        _factory = image.CloneAs<TPixel>;
    }

    public ImageSelector(string path)
    {
        _factory = () => Image.Load<TPixel>(path);
    }

    public ImageSelector(byte[] data)
    {
        _factory = () => Image.Load<TPixel>(data);
    }

    public ImageSelector(Stream stream)
    {
        _factory = () => Image.Load<TPixel>(stream);
    }

    internal Image<TPixel> Load(bool autoOrient)
    {
        var image = _factory();

        if (autoOrient)
            image.Mutate(x => x.AutoOrient());

        return image;
    }

    public static implicit operator ImageSelector<TPixel>(Image image) => new(image);
    public static implicit operator ImageSelector<TPixel>(string path) => new(path);
    public static implicit operator ImageSelector<TPixel>(byte[] data) => new(data);
    public static implicit operator ImageSelector<TPixel>(Stream stream) => new(stream);
}
Learnt something new; I must be careful to apply it only where it adds value.
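The pattern is easy to reproduce outside of image processing. This minimal stand-in (a hypothetical PayloadSelector, not the NuGet’s class) shows how the implicit operators let a single method signature accept a string, a byte array, or a Stream:

```csharp
using System;
using System.IO;

// Minimal stand-in for the ImageSelector pattern: each constructor wraps a
// different source in a factory, and the implicit operators let callers pass
// any of the source types directly to a method taking a PayloadSelector.
public class PayloadSelector
{
    private readonly Func<byte[]> _factory;

    public PayloadSelector(string text) => _factory = () => System.Text.Encoding.UTF8.GetBytes(text);

    public PayloadSelector(byte[] data) => _factory = () => data;

    public PayloadSelector(Stream stream) => _factory = () =>
    {
        using var buffer = new MemoryStream();
        stream.CopyTo(buffer);
        return buffer.ToArray();
    };

    public byte[] Load() => _factory();

    public static implicit operator PayloadSelector(string text) => new(text);
    public static implicit operator PayloadSelector(byte[] data) => new(data);
    public static implicit operator PayloadSelector(Stream stream) => new(stream);
}

public static class Consumer
{
    // One method signature handles all three source types.
    public static int Measure(PayloadSelector selector) => selector.Load().Length;
}
```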
All of the implementations load the model, load the sample image, detect objects in the image, then markup the image with the classification, minimum bounding boxes, and confidences of each object.
Input Image
The first implementation uses YoloV8 by dme-compunet which supports asynchronous operation. The image is loaded asynchronously, the prediction is asynchronous, then marked up and saved asynchronously.
using (var predictor = new Compunet.YoloV8.YoloV8(_applicationSettings.ModelPath))
{
    Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model load done");
    Console.WriteLine();

    using (var image = await SixLabors.ImageSharp.Image.LoadAsync<Rgba32>(_applicationSettings.ImageInputPath))
    {
        Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model detect start");
        var predictions = await predictor.DetectAsync(image);
        Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model detect done");
        Console.WriteLine();

        Console.WriteLine($" Speed: {predictions.Speed}");

        foreach (var prediction in predictions.Boxes)
        {
            Console.WriteLine($" Class {prediction.Class} {(prediction.Confidence * 100.0):f1}% X:{prediction.Bounds.X} Y:{prediction.Bounds.Y} Width:{prediction.Bounds.Width} Height:{prediction.Bounds.Height}");
        }
        Console.WriteLine();

        Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} Plot and save : {_applicationSettings.ImageOutputPath}");

        SixLabors.ImageSharp.Image imageOutput = await predictions.PlotImageAsync(image);

        await imageOutput.SaveAsJpegAsync(_applicationSettings.ImageOutputPath);
    }
}
dme-compunet YoloV8 test application output
The second implementation uses YoloDotNet by NickSwardh which partially supports asynchronous operation. The image is loaded asynchronously, the prediction is synchronous, the markup is synchronous, and then the image is saved asynchronously.
using (var predictor = new Yolo(_applicationSettings.ModelPath, false))
{
    Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model load done");
    Console.WriteLine();

    using (var image = await SixLabors.ImageSharp.Image.LoadAsync<Rgba32>(_applicationSettings.ImageInputPath))
    {
        Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model detect start");
        var predictions = predictor.RunObjectDetection(image);
        Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model detect done");
        Console.WriteLine();

        foreach (var prediction in predictions)
        {
            Console.WriteLine($" Class {prediction.Label.Name} {(prediction.Confidence * 100.0):f1}% X:{prediction.BoundingBox.Left} Y:{prediction.BoundingBox.Y} Width:{prediction.BoundingBox.Width} Height:{prediction.BoundingBox.Height}");
        }
        Console.WriteLine();

        Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} Plot and save : {_applicationSettings.ImageOutputPath}");

        image.Draw(predictions);

        await image.SaveAsJpegAsync(_applicationSettings.ImageOutputPath);
    }
}
NickSwardh YoloDotNet test application output
The third implementation uses YoloV8 by sstainba which partially supports asynchronous operation. The image is loaded asynchronously, the prediction is synchronous, the markup is synchronous, and then saved asynchronously.
using (var predictor = YoloV8Predictor.Create(_applicationSettings.ModelPath))
{
    Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model load done");
    Console.WriteLine();

    using (var image = await SixLabors.ImageSharp.Image.LoadAsync<Rgba32>(_applicationSettings.ImageInputPath))
    {
        Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model detect start");
        var predictions = predictor.Predict(image);
        Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} YoloV8 Model detect done");
        Console.WriteLine();

        foreach (var prediction in predictions)
        {
            Console.WriteLine($" Class {prediction.Label.Name} {(prediction.Score * 100.0):f1}% X:{prediction.Rectangle.X} Y:{prediction.Rectangle.Y} Width:{prediction.Rectangle.Width} Height:{prediction.Rectangle.Height}");
        }
        Console.WriteLine();

        Console.WriteLine($" {DateTime.UtcNow:yy-MM-dd HH:mm:ss.fff} Plot and save : {_applicationSettings.ImageOutputPath}");

        // This is a bit hacky, should be fixed up in a future release
        Font font = new Font(SystemFonts.Get(_applicationSettings.FontName), _applicationSettings.FontSize);

        foreach (var prediction in predictions)
        {
            var x = (int)Math.Max(prediction.Rectangle.X, 0);
            var y = (int)Math.Max(prediction.Rectangle.Y, 0);
            var width = (int)Math.Min(image.Width - x, prediction.Rectangle.Width);
            var height = (int)Math.Min(image.Height - y, prediction.Rectangle.Height);

            // Note that the output is already scaled to the original image height and width.

            // Bounding Box Text
            string text = $"{prediction.Label.Name} [{prediction.Score}]";
            var size = TextMeasurer.MeasureSize(text, new TextOptions(font));

            image.Mutate(d => d.Draw(Pens.Solid(Color.Yellow, 2), new Rectangle(x, y, width, height)));
            image.Mutate(d => d.DrawText(text, font, Color.Yellow, new Point(x, (int)(y - size.Height - 1))));
        }

        await image.SaveAsJpegAsync(_applicationSettings.ImageOutputPath);
    }
}
sstainba YoloV8 test application output
I don’t understand why the three NuGets produced slightly different results, which is worrying.