YoloModelRunner simplifies running YOLO ONNX models in Unity. Just drag and drop your model to get detection, classification, segmentation, or pose results, with no tensor parsing and no external dependencies.

YoloModelRunner is a Unity plugin that makes running YOLO models in Unity effortless. It removes the need to understand fixed input tensor sizes, complex output tensor structures, and manual post-processing logic. Simply drag and drop your YOLO ONNX model into the ModelRunner, and the plugin automatically handles preprocessing, inference, and output decoding.

The plugin returns clean, structured Prediction results including class labels, confidence scores, bounding boxes, segmentation masks, and pose landmarks (illustrative sketches of this output follow at the end of this page). It supports Detection, Classification, Segmentation, and Pose YOLO models, including custom-trained models, and works with image, video, and real-time webcam inputs, making it ideal for games, AR/VR, and computer vision applications in Unity.

Key Features
- Zero-complexity YOLO integration in Unity
- No external dependencies (Unity Inference Engine only)
- Drag & drop YOLO ONNX model workflow
- Supports Detection, Classification, Segmentation, and Pose models
- Compatible with custom-trained YOLO models
- Unified Prediction output (class, confidence, bounding box, mask, landmarks)
- Image, Video, and Webcam input sources
- Adjustable confidence and IoU thresholds
- CPU, GPU Compute, and GPU Pixel backend support
- Demo scenes included for all supported model types

Use Cases
- Real-time object detection, classification, segmentation, and pose estimation in Unity projects
- Gesture detection
- Segmenting objects from images
- Image masking
- AR/VR object tracking and interaction
- Image and video analysis tools
- Pose-based gameplay and motion analysis
- AI-powered visualization and analytics
- Rapid prototyping of computer vision features in Unity

Technical Details
- Unity Version: Unity 2021 LTS or later
- ML Backend: Unity Inference Engine (ONNX runtime)
- Render Pipeline: Built-in, URP, HDRP
- Supported Models: YOLO Object Detection, YOLO Classification, YOLO Segmentation, YOLO Pose Estimation; custom-trained YOLO ONNX models are supported
- Input Sources: Image, Video, Webcam (real-time)
- Processing Pipeline: automatic input resizing and normalization, automatic output tensor parsing, built-in confidence filtering and IoU-based NMS
- Output: unified Prediction structure with class index, class label, confidence, bounding box, segmentation mask (if supported), and pose landmarks (if supported)
- Execution Backends: CPU, GPU Compute, GPU Pixel
- Dependencies: no external dependencies (official Unity packages only)
- Demo Scenes: separate demo scenes for Detection, Classification, Segmentation, and Pose

Installation
This plugin depends on the Inference Engine Unity registry package: com.unity.ai.inference
1. Go to Window → Package Manager
2. Click + → Add package by name
3. Enter: com.unity.ai.inference

This package includes documentation and promotional materials generated with AI assistance (ChatGPT, DALL·E). All scripts, logic, and core functionality were manually developed by the author.
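For illustration only, here is a minimal sketch of what the unified Prediction output listed above (class index, class label, confidence, bounding box, optional segmentation mask, optional pose landmarks) could look like as a plain C# container. The type and field names are assumptions chosen for readability, not the plugin's actual API.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical container mirroring the fields listed under "Output".
// Names and types are illustrative assumptions, not the plugin's actual API.
public class YoloPrediction
{
    public int ClassIndex;              // index into the model's label list
    public string ClassLabel;           // human-readable class name
    public float Confidence;            // score in 0..1, after confidence filtering
    public Rect BoundingBox;            // detection box (Detection/Segmentation/Pose models)
    public Texture2D SegmentationMask;  // null unless a Segmentation model is used
    public List<Vector2> Landmarks;     // empty unless a Pose model is used
}
```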

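And a rough consumer-side sketch of how such predictions might be filtered with an adjustable confidence threshold and drawn on screen. How the predictions are produced (the runner component and its method names) is deliberately left out, since the plugin's real API is not documented on this page; treat everything below as an assumption-level example, not the shipped workflow.

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Illustrative consumer of the hypothetical YoloPrediction type above.
// Producing the predictions is left to whatever API the plugin actually exposes.
public class PredictionOverlay : MonoBehaviour
{
    [Range(0f, 1f)] public float confidenceThreshold = 0.5f;

    private readonly List<YoloPrediction> latest = new List<YoloPrediction>();

    // Feed this with the predictions returned for the current frame.
    public void SetPredictions(IEnumerable<YoloPrediction> predictions)
    {
        latest.Clear();
        latest.AddRange(predictions.Where(p => p.Confidence >= confidenceThreshold));
    }

    void OnGUI()
    {
        // Draws a simple labelled box per prediction; assumes boxes in screen pixels
        // with a top-left origin, matching Unity's IMGUI coordinate space.
        foreach (YoloPrediction p in latest)
        {
            GUI.Box(p.BoundingBox, $"{p.ClassLabel} {p.Confidence:P0}");
        }
    }
}
```

In practice, SetPredictions would be wired to whatever per-frame result callback or method the plugin exposes for image, video, or webcam input.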



