YOLO Models Benchmarked for Orange Detection in Ag Robots

In the quest to revolutionize agriculture with autonomous robots, one of the most significant hurdles has been developing visual perception systems that can reliably operate in the unpredictable, real-world environments of farms. A recent study published in *Agriculture* takes a crucial step toward addressing this challenge by benchmarking state-of-the-art You Only Look Once (YOLO) models for orange detection and segmentation—a task that could dramatically improve the efficiency of agricultural robotics.

The research, led by Caner Beldek of the University of Wollongong, evaluated YOLO-based models not just on accuracy but also on practical deployment factors such as computational efficiency, energy consumption, and robustness to environmental disturbances. This holistic approach provides a clearer picture of which models are best suited for real-world agricultural applications.

“Selecting the right vision model for agricultural robots is not just about achieving high accuracy in controlled settings,” Beldek explained. “It’s about ensuring the model can perform reliably under varying conditions, from fluctuating lighting to weather disturbances, while also being energy-efficient and computationally feasible.”

The study compared models across five key dimensions: identification accuracy, robustness, model complexity, execution time, and energy consumption. The results revealed that YOLOv5 variants excelled in detection and segmentation accuracy, while YOLOv11-based models delivered consistently strong performance across all disturbance levels, highlighting their robustness. Lightweight architectures were found to be particularly well-suited to resource-constrained operations, and nano-scale variants showed promise for meeting real-time and energy-efficiency requirements.
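The paper's exact evaluation pipeline is not reproduced in this article, but the accuracy and execution-time comparisons it describes can be sketched with the Ultralytics Python API. In the snippet below, the model weights, dataset YAML, and sample image are illustrative placeholders, not the study's actual configuration, and the latency loop is only a rough proxy for the timing methodology the authors used.

```python
import time
from ultralytics import YOLO  # pip install ultralytics

# Illustrative candidate models only; the study's exact variants are not specified here.
CANDIDATES = ["yolov5nu.pt", "yolov8n-seg.pt", "yolo11n-seg.pt"]
DATASET = "oranges.yaml"      # hypothetical dataset config (train/val splits, class names)
SAMPLE_IMAGE = "orchard.jpg"  # hypothetical test image

for weights in CANDIDATES:
    model = YOLO(weights)

    # Accuracy: validate on the labelled orange dataset (reports mAP metrics).
    metrics = model.val(data=DATASET)

    # Execution time: rough per-image latency averaged over repeated predictions.
    runs = 20
    start = time.perf_counter()
    for _ in range(runs):
        model.predict(SAMPLE_IMAGE, verbose=False)
    latency_ms = (time.perf_counter() - start) / runs * 1000

    print(f"{weights}: mAP50-95={metrics.box.map:.3f}, ~{latency_ms:.1f} ms/image")
```

A fuller benchmark in the spirit of the study would also log parameter counts, energy draw on the target hardware, and accuracy under simulated disturbances such as lighting changes, but those steps depend on details not given in the article.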

One of the more surprising findings was that custom models did not consistently outperform their baseline counterparts, suggesting that off-the-shelf models may already be well-optimized for many agricultural applications. This could have significant commercial implications, as it may reduce the need for costly customizations and accelerate the deployment of autonomous robots in farming.

The study’s findings offer valuable, evidence-based guidelines for developers and companies working on precision agriculture robots. By providing a clear benchmark for YOLO models, the research could help streamline the selection process and ensure that the vision systems powering these robots are both accurate and practical for real-world use.

As the agriculture sector continues to embrace automation and robotics, this research could shape the future of precision farming. By optimizing visual perception systems, farmers may soon have access to more reliable, efficient, and cost-effective robotic solutions, ultimately contributing to sustainable and productive agricultural practices.

The study was led by Caner Beldek of the School of Mechanical, Materials, Mechatronic and Biomedical Engineering at the University of Wollongong, Australia, and published in *Agriculture*.
