
Vision Recognition and Positioning Optimization of Industrial Robots Based on Deep Learning


DOI: 10.23977/jaip.2024.070207

Author(s)

Xiran Su 1

Affiliation(s)

1 Beijing Sineva Robot Technology Co., Ltd, Beijing, 100176, China

Corresponding Author

Xiran Su

ABSTRACT

Visual recognition and positioning optimization of industrial robots plays a vital role in automated production. To address this problem, this study proposes a deep-learning-based method for visual recognition and positioning optimization: the Multi-Scale Attention-based Deep Learning Visual Localization Network (MSA-DLVN). By introducing a multi-scale attention mechanism, the method effectively improves the visual perception and positioning accuracy of industrial robots in complex environments. Comparative experiments on real-scene datasets show that the MSA-DLVN method significantly outperforms traditional methods in visual positioning optimization and workpiece recognition. Specifically, it improves positioning accuracy by 1.3 cm over the baseline method and raises workpiece recognition accuracy by 9 percentage points. In addition, the MSA-DLVN method maintains good robustness and generality across different experimental scenarios and datasets. This study provides a reliable solution for visual recognition and positioning optimization of industrial robots, which helps advance industrial automation.
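The abstract does not specify how the multi-scale attention mechanism is implemented. As a purely illustrative sketch (not the paper's actual architecture), one common pattern is to pool a feature map at several scales, score each scale with a global descriptor, and blend the upsampled branches with softmax attention weights. The function names, the set of scales, and the scalar descriptor below are all assumptions for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def avg_pool(feat, k):
    # Downsample an (H, W, C) feature map by factor k with
    # non-overlapping average pooling.
    h, w, c = feat.shape
    h2, w2 = h // k, w // k
    return feat[:h2 * k, :w2 * k].reshape(h2, k, w2, k, c).mean(axis=(1, 3))

def upsample(feat, out_h, out_w):
    # Nearest-neighbor upsampling back to the original resolution.
    h, w, _ = feat.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return feat[rows][:, cols]

def multi_scale_attention(feat, scales=(1, 2, 4)):
    """Fuse feature maps from several scales via softmax attention.

    Each scale contributes a scalar descriptor (here: its global mean);
    a softmax over the descriptors yields one attention weight per scale,
    and the upsampled per-scale maps are blended with those weights.
    """
    h, w, _ = feat.shape
    branches, scores = [], []
    for s in scales:
        fs = avg_pool(feat, s) if s > 1 else feat
        branches.append(upsample(fs, h, w))
        scores.append(fs.mean())            # one scalar descriptor per scale
    weights = softmax(np.array(scores))     # attention weights over scales
    fused = sum(wgt * b for wgt, b in zip(weights, branches))
    return fused, weights
```

In a trained network the scale descriptors and the fusion would be learned (e.g. by small convolutional or fully connected layers) rather than fixed means; this sketch only shows the data flow of scale-wise attention fusion.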

KEYWORDS

Deep learning; Industrial robots; Visual recognition and positioning; MSA-DLVN

CITE THIS PAPER

Xiran Su, Vision Recognition and Positioning Optimization of Industrial Robots Based on Deep Learning. Journal of Artificial Intelligence Practice (2024) Vol. 7: 49-55. DOI: http://dx.doi.org/10.23977/jaip.2024.070207.

REFERENCES

[1] Zhang, X., Zhou, M., Qiu, P., Huang, Y., & Li, J. (2019). Radar and vision fusion for the real-time obstacle detection and identification. Industrial Robot, 46(3), 391-395.
[2] Rajpar, A. H., Eladwi, A. E., Ali, I., & Bashir, M. B. A. (2021). Reconfigurable articulated robot using android mobile device. Journal of Robotics, 2021(3), 1-8.
[3] Hou, X., Ao, W., Song, Q., Lai, J., Wang, H., & Xu, F. (2020). Fusar-ship: building a high-resolution sar-ais matchup dataset of gaofen-3 for ship detection and recognition. Science China Information Sciences, 63(4), 1-19.
[4] Gao, P., Zhao, D., & Chen, X. (2020). Multi-dimensional data modelling of video image action recognition and motion capture in deep learning framework. IET Image Processing, 14(7), 1257-1264.
[5] He, Y., Chen, Y., Hu, Y., & Zeng, B. (2020). Wifi vision: sensing, recognition, and detection with commodity mimo-ofdm wifi. IEEE Internet of Things Journal, 7(9), 8296-8317.
[6] Guan, W., Chen, S., Wen, S., Tan, Z., Song, H., & Hou, W. (2020). High-accuracy robot indoor localization scheme based on robot operating system using visible light positioning. IEEE Photonics Journal, 12(2), 1-16.
[7] Wan, G., Wang, G., Xing, K., Fan, Y., & Yi, T. (2021). Robot visual measurement and grasping strategy for rough castings. International Journal of Advanced Robotic Systems, 18(2), 715-720.
[8] Jiang, W., Zou, D., Zhou, X., Zuo, G., & Li, H. J. (2020). Research on key technologies of multi-task-oriented live maintenance robots for ultra high voltage multi-split transmission lines. Industrial Robot, 48(1), 17-28.
[9] Algburi, R. N. A., & Gao, H. (2019). Health assessment and fault detection system for an industrial robot using the rotary encoder signal. Energies, 12(14), 2816.
[10] Zhu, C., Yang, J., Shao, Z., & Liu, C. (2022). Vision based hand gesture recognition using 3d shape context. IEEE/CAA Journal of Automatica Sinica, 8(9), 1600-1613.


All published work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2016 - 2031 Clausius Scientific Press Inc. All Rights Reserved.