本期目录/Table of Contents

[1]黄昆涛,邹 成,周耀胜*,等.基于语义分割网络的点云焊缝识别研究 [J].武汉工程大学学报,2026,48(02):209-215.[doi:10.19843/j.cnki.CN42-1779/TQ.202505007]
 HUANG Kuntao,ZOU Cheng,ZHOU Yaosheng*,et al. Weld seam recognition from point cloud using a semantic segmentation network [J].Journal of Wuhan Institute of Technology,2026,48(02):209-215.[doi:10.19843/j.cnki.CN42-1779/TQ.202505007]

基于语义分割网络的点云焊缝识别研究

《武汉工程大学学报》[ISSN:1674-2869/CN:42-1779/TQ]

卷:
48
期数:
2026年02期
页码:
209-215
栏目:
智能制造
出版日期:
2026-04-30

文章信息/Info

Title:
Weld seam recognition from point cloud using a semantic segmentation network

文章编号:
1674-2869(2026)02-0209-07
作者:
黄昆涛,邹 成,周耀胜*,陈绪兵,刘 希
武汉工程大学机电工程学院,湖北 武汉 430205
Author(s):
HUANG Kuntao,ZOU Cheng,ZHOU Yaosheng*,CHEN Xubing,LIU Xi
School of Mechanical and Electrical Engineering,Wuhan Institute of Technology,Wuhan 430205,China
关键词:
焊缝识别;语义分割网络;点云处理;机器人路径规划
Keywords:
welding seam recognition;semantic segmentation network;point cloud processing;robot path planning
分类号:
TP391
DOI:
10.19843/j.cnki.CN42-1779/TQ.202505007
文献标志码:
A
摘要:
复杂构件的免示教焊接虽具有自动化程度高、人工成本低的优点,但其成功应用高度依赖于适应性与鲁棒性俱佳的焊缝识别方法。现有的大多数方法在工件繁多或环境恶劣的工况下,识别成功率普遍较低。本文提出一种基于语义分割网络的点云焊缝识别方法。该方法由相机采集工件的灰度图像和三维点云,根据灰度图像,采用DeepLabv3-ResNet50语义分割网络训练焊缝平面提取模型。在模型部署阶段,利用训练得到的焊缝平面提取模型计算焊缝相邻平面的像素坐标,基于像素坐标反向投射回三维点云中,经过平面求交算法计算出三维点云中的焊缝线段。最后通过坐标转换,得到焊缝线段在机器人坐标系下的位置。实验结果表明:生成的轨迹与人工示教轨迹平均偏差为0.208 mm,识别精度为95.3%。相比于传统的点云曲率采样方法和八叉树点云采样方法,本文提出的方法在面对多平面工件和强曝光环境时对焊缝的提取具有更高的鲁棒性,且能适配不同环境的焊缝识别。
Abstract:
Teaching-free welding of complex components offers a high degree of automation and a significant reduction in labor costs. However, its application relies heavily on weld seam recognition methods with strong adaptability and robustness. Most existing approaches exhibit low recognition success rates in multi-workpiece setups or harsh environments. This paper proposes a weld seam recognition method based on a semantic segmentation network using point cloud data. In this method, a camera acquires grayscale images and 3D point clouds of the workpiece; a DeepLabv3-ResNet50 semantic segmentation network is then trained on the grayscale images to extract the weld planes. During deployment, the trained model predicts the pixel coordinates of the planes adjacent to the weld seam, which are back-projected into the 3D point cloud. A plane intersection algorithm then computes the weld seam segments from the point cloud. Finally, a coordinate transformation locates these segments in the robot coordinate system. Experimental results show that the generated trajectory deviates from the manually taught trajectory by 0.208 mm on average, with a recognition accuracy of 95.3%. Compared with traditional curvature-based and octree-based point cloud sampling methods, the proposed approach is more robust for multi-plane workpieces and highly reflective environments, and adapts to weld seam recognition under varied working conditions.
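The plane-intersection step described in the abstract can be illustrated with a minimal NumPy sketch: fit a plane to each segmented point-cloud patch by least squares, then intersect the two planes to obtain the seam line. This is an assumption-laden reconstruction for illustration only, not the authors' implementation; the function names and the synthetic right-angle plates in the example are invented here.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit. Returns (unit normal n, offset d) with n.p + d = 0."""
    centroid = points.mean(axis=0)
    # The singular vector of the centered points with the smallest singular
    # value is the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, -n.dot(centroid)

def intersect_planes(n1, d1, n2, d2):
    """Intersection line of two planes. Returns (point on line, unit direction)."""
    direction = np.cross(n1, n2)
    norm = np.linalg.norm(direction)
    if norm < 1e-9:
        raise ValueError("planes are (nearly) parallel; no unique seam line")
    direction /= norm
    # Find one point satisfying both plane equations; the third row pins the
    # solution to the point on the line closest to the origin.
    A = np.stack([n1, n2, direction])
    b = np.array([-d1, -d2, 0.0])
    return np.linalg.solve(A, b), direction

if __name__ == "__main__":
    # Two synthetic plates meeting at a right angle: z = 0 and x = 0,
    # whose seam is the y-axis.
    rng = np.random.default_rng(0)
    plate_a = np.column_stack([rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50), np.zeros(50)])
    plate_b = np.column_stack([np.zeros(50), rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50)])
    p, v = intersect_planes(*fit_plane(plate_a), *fit_plane(plate_b))
    print("seam point:", p, "seam direction:", v)
```

In a full pipeline, the two patches would come from back-projecting the pixels that the segmentation model labels as seam-adjacent planes, and the resulting line segment would then be transformed into the robot coordinate system.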

参考文献/References:

[ 1 ] 陈德军.数字制造信息学[M].武汉: 武汉理工大学出版社,2018.
[ 2 ] 周莉莎,李晓帆,张帆,等.基于专利大数据的工业机器人产业发展态势研究[J].机器人技术与应用,2025(3):9-12.
[ 3 ] 熊思淇,陈绪兵,张聪,等.基于MATLAB的6R焊接机器人运动学的仿真研究[J].武汉工程大学学报,2020,42(5):568-574.
[ 4 ] 王仕仙,陈绪兵.焊接轨迹跟踪控制中的深度视觉研究进展[J].武汉工程大学学报,2023,45(4):378-383.
[ 5 ] 谢盛,魏昕,梁梓铭.基于帧间匹配去噪的角接焊缝识别[J].电焊机,2020,50(6):48-53.
[ 6 ] FISCHLER M A, BOLLES R C. Random sample consensus:a paradigm for model fitting with applications to image analysis and automated cartography[J]. Communications of the ACM,1981,24(6):381-395.
[ 7 ] XU Y L,Lü N,FANG G,et al. Welding seam tracking in robotic gas metal arc welding[J]. Journal of Materials Processing Technology,2017,248:18-30.
[ 8 ] 贾振威.基于结构光立体视觉的相贯线焊缝实时跟踪焊接[D]. 天津:天津工业大学,2020.
[ 9 ] BANAFIAN N,FESHARAKIFARD R, MENHAJ M B. Precise seam tracking in robotic welding by an improved image processing approach[J]. The International Journal of Advanced Manufacturing Technology,2021,114:251-270.
[10] 郭盛威,章秀华,范艳,等.三维重建表面几何特征的提取与参数测量计算[J].武汉工程大学学报,2016,38(2):185-188.
[11] DAI A,CHANG A X, SAVVA M, et al. ScanNet: richly-annotated 3D reconstructions of indoor scenes [C] // 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway,NJ: IEEE,2017: 2432-2443.
[12] BEHLEY J, GARBADE M, MILIOTO A, et al. SemanticKITTI: a dataset for semantic scene understanding of LiDAR sequences [C] // 2019 IEEE/CVF International Conference on Computer Vision. Piscataway,NJ: IEEE,2019: 9296-9306.
[13] GUO Y L,WANG H Y,HU Q Y,et al. Deep learning for 3D point clouds: a survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2021,43(12):4338-4364.
[14] 李海生,武玉娟,郑艳萍,等.基于深度学习的三维数据分析理解方法研究综述[J].计算机学报,2020,43(1):41-63.
[15] 龚靖渝,楼雨京,柳奉奇,等.三维场景点云理解与重建技术[J].中国图象图形学报,2023,28(6):1741-1766.
[16] 洪汉玉,吴裕强,叶亮,等.基于线结构光扫描的工件高精度三维测量方法[J].武汉工程大学学报,2024,46(1):66-71.
[17] LIU F Q,WANG Z Y,WANG X J,et al. Tacked weld point recognition from geometrical features [C] //Robotic Welding,Intelligence and Automation. Berlin,Germany: Springer,2015:47-56.
[18] 高兴宇,罗祥雄,李伟明,等.一种基于线结构光条纹特征的抗反光噪声角焊缝识别算法[J].热加工工艺,2025,54(10):37-42.


备注/Memo:
收稿日期:2025-05-09
基金项目:武汉工程大学研究生教育创新基金(CX2024415)
作者简介:黄昆涛,博士,副教授。Email:huangktao@wit.edu.cn
*通信作者:周耀胜,博士,副教授。Email:873556357@qq.com

更新日期/Last Update: 2026-05-07