Jiangsu Researchers Revolutionize Apple Picking with AI-Powered Robots

In the heart of China’s Jiangsu province, researchers are tackling a challenge that could revolutionize the agricultural industry: automating the apple-picking process. Tianzhong Fang, a professor at the College of Automation, Jiangsu University of Science and Technology, has led a team to develop a cutting-edge visual system for apple-harvesting robots, promising to boost efficiency and precision in orchards worldwide.

The team’s work, recently published in the journal Horticulturae, addresses significant shortcomings in the machine vision systems currently used in agricultural automation. These systems often struggle in complex orchard environments, where uneven illumination, foliage occlusion, and overlapping fruits hinder target detection and positioning accuracy.

“Existing vision systems have shown considerable limitations in real-world orchard settings,” Fang explained. “Our goal was to create a more robust system that could accurately identify and position apples, even in challenging conditions.”

The researchers turned to an improved Mask Regional-Convolutional Neural Network (Mask R-CNN) combined with binocular vision to achieve more precise fruit positioning. A binocular camera (ZED2i) mounted on the robot captures a stereo pair of apple images. The improved Mask R-CNN then performs instance segmentation of the apple targets in these images, and a template-matching algorithm constrained to parallel epipolar lines handles stereo matching.
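The epipolar constraint is what keeps the stereo-matching step tractable: in rectified stereo images, a point in the left image can only appear on the same row in the right image, so the template search collapses from 2D to a single scan line. The sketch below illustrates the idea with a normalized cross-correlation search along one row; it is a minimal illustration of epipolar-constrained template matching in general, not the authors' implementation, and the function name and patch size are invented for the example.

```python
import numpy as np

def match_along_epipolar(left, right, patch_center, patch_size=5):
    """Find the best match for a left-image patch by sliding a template
    along the same row of the right image (the epipolar line, assuming
    rectified images). Returns (matched column, NCC score)."""
    r, c = patch_center
    h = patch_size // 2
    template = left[r - h:r + h + 1, c - h:c + h + 1].astype(float)
    template -= template.mean()  # zero-mean for normalized correlation

    best_col, best_score = None, -np.inf
    for cc in range(h, right.shape[1] - h):
        window = right[r - h:r + h + 1, cc - h:cc + h + 1].astype(float)
        window -= window.mean()
        denom = np.linalg.norm(template) * np.linalg.norm(window)
        score = (template * window).sum() / denom if denom else -np.inf
        if score > best_score:
            best_score, best_col = score, cc
    return best_col, best_score
```

The disparity for the point is then simply the left column minus the matched right column, which feeds directly into the depth calculation described next.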

Four pairs of feature points from corresponding apples in the binocular images are selected to calculate disparity and depth, enabling the robot to accurately locate the fruits. The experimental results were impressive, with an average coefficient of variation of 5.09% and positioning accuracy of 99.61% in binocular positioning.
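The geometry behind this step is standard stereo triangulation: depth Z = f·B/d, where f is the focal length in pixels, B is the camera baseline, and d is the disparity. The sketch below shows how four feature-point disparities could be turned into an averaged depth plus a coefficient of variation as a consistency measure; the function name and the calibration numbers in the test are illustrative assumptions, not the ZED2i's actual parameters or the paper's method in detail.

```python
import numpy as np

def depth_from_disparity(disparities_px, focal_px, baseline_m):
    """Triangulate depth Z = f * B / d for each feature-point disparity,
    then report the mean depth and the coefficient of variation (%)
    across the points as a spread/consistency measure."""
    d = np.asarray(disparities_px, dtype=float)
    depths = focal_px * baseline_m / d          # metres, one per feature point
    cv = depths.std() / depths.mean() * 100.0   # coefficient of variation (%)
    return depths.mean(), cv
```

A low coefficient of variation across the four points indicates the matched features agree on the apple's depth, which is the kind of consistency the reported 5.09% figure reflects.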

During harvesting operations with a self-designed apple-picking robot, the system demonstrated a single-image processing time of 0.36 seconds, an average single harvesting cycle duration of 7.7 seconds, and a comprehensive harvesting success rate of 94.3%.

The implications of this research are vast. As agricultural automation technologies advance, the demand for efficient and accurate harvesting solutions grows. This visual system could significantly enhance the capabilities of apple-harvesting robots, making them more reliable and effective in real-world orchard environments.

“Our work presents a novel high-precision visual positioning method for apple-harvesting robots,” Fang noted. “This could pave the way for more widespread adoption of automation in agriculture, ultimately leading to increased productivity and reduced labor costs.”

The work also carries implications beyond agriculture, notably for the energy sector. Fleets of field robots need efficient, reliable power, so wider adoption of harvesting automation could spur complementary innovation in energy storage and management, helping keep these technologies sustainable and environmentally friendly.

In the quest for more efficient and precise agricultural practices, this research marks a significant step forward. As Tianzhong Fang and his team continue to refine their system, the future of apple harvesting—and perhaps all of agriculture—looks increasingly automated and high-tech.
