Thailand’s 1D-ResNeXt Model Revolutionizes Human-Robot Harvesting Collaboration

A new study out of Thailand is making waves in agricultural innovation, promising to redefine how humans and robots collaborate in the field. Researchers have developed a deep learning model that accurately recognizes human activities during collaborative harvesting tasks, a breakthrough that could significantly enhance efficiency and safety in agriculture.

The study, led by Sakorn Mekruksavanich of the Department of Computer Engineering at the University of Phayao, Thailand, focuses on human awareness in human-robot interaction, a critical consideration in agricultural environments, where interactions are shaped by complex contextual information. The team's model, named 1D-ResNeXt, is designed specifically for recognizing activities in agriculture-related human-robot collaboration. It incorporates feature fusion and a multi-kernel convolutional block strategy, using residual connections and a split–transform–merge mechanism to mitigate performance degradation and reduce model complexity.
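The article does not reproduce the authors' implementation, but the two architectural ideas it names, residual split–transform–merge blocks and multi-kernel convolutions, can be sketched. The PyTorch snippet below is an illustrative reading rather than the paper's code: the channel counts, kernel sizes, and cardinality are all assumptions.

```python
import torch
import torch.nn as nn


class MultiKernelBlock1D(nn.Module):
    """Parallel 1D convolutions with different kernel sizes, fused by
    concatenation; one plausible reading of a "multi-kernel
    convolutional block" with feature fusion."""

    def __init__(self, in_channels, branch_channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(in_channels, branch_channels, k, padding=k // 2)
            for k in kernel_sizes
        ])

    def forward(self, x):
        # Fuse multi-scale temporal features along the channel axis.
        return torch.cat([branch(x) for branch in self.branches], dim=1)


class ResNeXtBlock1D(nn.Module):
    """A 1D ResNeXt-style residual block. The grouped convolution
    realizes split-transform-merge: channels are split into
    `cardinality` groups, transformed independently, and merged."""

    def __init__(self, channels, cardinality=8, kernel_size=3):
        super().__init__()
        assert channels % cardinality == 0
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, 1),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size,
                      padding=kernel_size // 2, groups=cardinality),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, 1),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # The identity (residual) connection mitigates the performance
        # degradation that deep plain convolutional stacks suffer from.
        return self.act(self.body(x) + x)


if __name__ == "__main__":
    # A hypothetical batch of 9-channel windows (3 sensors x 3 axes),
    # 50 samples each, e.g. 0.5 s at an assumed 100 Hz sampling rate.
    x = torch.randn(8, 9, 50)
    feats = MultiKernelBlock1D(9, 16)(x)            # -> (8, 48, 50)
    out = ResNeXtBlock1D(48, cardinality=8)(feats)
    print(out.shape)                                # torch.Size([8, 48, 50])
```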

The research involved collecting sensor data from twenty participants wearing five devices placed on different parts of the body, each device containing a tri-axial accelerometer, gyroscope, and magnetometer. The participants performed several sub-tasks commonly associated with agricultural labor, such as lifting and carrying loads. The raw sensor signals were pre-processed to remove noise before being fed into the proposed deep learning network for sequential pattern recognition.
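The article does not detail the pre-processing pipeline, so the following Python sketch shows one conventional approach for wearable-sensor data: a zero-phase low-pass filter followed by sliding-window segmentation. The filter order, cutoff, and window overlap are assumptions; only the 0.5 s window length comes from the study's reported results.

```python
import numpy as np
from scipy.signal import butter, filtfilt


def preprocess(signal, fs, cutoff_hz=20.0, window_s=0.5, overlap=0.5):
    """Filter raw sensor channels and cut them into fixed-length windows.

    signal : (n_samples, n_channels) raw sensor stream
    fs     : sampling rate in Hz
    Returns: (n_windows, window_len, n_channels) array of segments
    """
    # 4th-order zero-phase Butterworth low-pass to suppress noise.
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    filtered = filtfilt(b, a, signal, axis=0)

    # Sliding-window segmentation; 0.5 s matches the window size the
    # study found to give the best recognition performance.
    win = int(window_s * fs)
    step = max(1, int(win * (1 - overlap)))
    segments = [filtered[start:start + win]
                for start in range(0, len(filtered) - win + 1, step)]
    return np.stack(segments)
```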

The results were impressive. The chest-mounted sensor achieved the highest F1-score of 99.86%, outperforming other sensor placements and combinations. The study also found that a temporal window size of 0.5 seconds provided the best recognition performance, indicating that key activity features in agriculture can be captured over short intervals. Moreover, the multimodal fusion of accelerometer, gyroscope, and magnetometer data yielded the best accuracy at 99.92%.
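The article does not say at which stage the modalities are combined; a simple possibility, sketched below, is early fusion, where the tri-axial streams are concatenated channel-wise before entering the network. The function and its layout are illustrative assumptions, not the paper's method.

```python
import numpy as np


def fuse_modalities(acc, gyro, mag=None):
    """Early (channel-level) fusion of tri-axial sensor streams.

    Each input is an (n_samples, 3) array. Passing all three modalities
    yields the 9-channel input associated with the study's best
    accuracy (99.92%); omitting the magnetometer gives the 6-channel
    accelerometer + gyroscope configuration reported as a good
    accuracy/complexity trade-off (99.49%).
    """
    parts = [acc, gyro] + ([mag] if mag is not None else [])
    return np.concatenate(parts, axis=1)  # (n_samples, 6 or 9 channels)
```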

“This research is a significant step forward in human-robot collaboration in agriculture,” said Mekruksavanich. “By accurately recognizing human activities, we can improve the efficiency and safety of collaborative tasks, ultimately leading to more productive and cost-effective agricultural practices.”

The implications for the agriculture sector are substantial. With the global population expected to reach 9.7 billion by 2050, the demand for food will increase dramatically. This research could help meet that demand by enhancing the productivity of agricultural labor through intelligent, efficient, and adaptive collaborative systems.

“The combination of accelerometer and gyroscope data offered an optimal compromise, achieving 99.49% accuracy while maintaining lower system complexity,” Mekruksavanich added. “This highlights the importance of strategic sensor placement and data fusion in enhancing activity recognition performance while reducing the need for extensive data and computational resources.”

As the agriculture sector continues to evolve, the integration of advanced technologies like wearable sensors and deep learning models will play a pivotal role. This research, published in the journal ‘Informatics’, not only contributes to the development of intelligent collaborative systems but also offers promising applications in agriculture and beyond, with improved safety, cost-efficiency, and real-time operational capability.

The study’s findings could pave the way for future developments in the field, shaping the next generation of human-robot collaboration systems. As we look to the future, the potential for these technologies to transform the agriculture sector is immense, promising a more sustainable and productive future for all.
