Deep Learning Framework for Controlling Work Sequence in Collaborative Human–Robot Assembly Processes

Research output: Contribution to journal › Article › peer-review


Abstract

Human–robot collaboration (HRC) solutions presented to date share a drawback: the interaction between human and robot depends on the human's state or on specific gestures performed deliberately by the human. This increases the time required to perform a task and slows the pace of human labor, making such solutions unattractive. This study introduces a different HRC concept: a framework for managing assembly processes that are executed simultaneously or individually by humans and robots. The framework, based on deep learning models, uses a single data source, RGB camera data, to make predictions about the collaborative workspace and the human action, and thereby manage the assembly process. To validate the framework, an industrial HRC demonstrator was built to assemble a mechanical component. Four variants of the framework were created, each based on a different convolutional neural network (CNN) structure: Faster R-CNN with ResNet-50, Faster R-CNN with ResNet-101, YOLOv2, and YOLOv3. The YOLOv3-based framework performed best, achieving a mean average precision of 72.26% and allowing the industrial demonstrator to successfully complete all assembly tasks within the desired time window. The HRC framework has thus proven effective for industrial assembly applications.
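To illustrate the control idea described in the abstract — per-frame class detections from an RGB object detector driving the assembly sequence — here is a minimal sketch of a sequence controller. The class names, task order, and confidence threshold are hypothetical illustrations, not taken from the paper; the detector itself (e.g., a YOLOv3-style model) is assumed to exist upstream and supply `(class_name, confidence)` pairs for each frame.

```python
# Hedged sketch: a finite-state controller that advances an assembly
# sequence when the (assumed) RGB detector reports the expected state.
# Class names and ordering below are illustrative, not from the paper.
ASSEMBLY_SEQUENCE = ["base_placed", "gear_mounted", "cover_fitted", "screws_tightened"]

class SequenceController:
    def __init__(self, sequence):
        self.sequence = sequence
        self.step = 0  # index of the next expected assembly state

    def update(self, detections):
        """Consume one frame's detections and advance the sequence if possible.

        `detections` is a list of (class_name, confidence) pairs.
        Returns the next expected state, or None once assembly is complete.
        """
        if self.step < len(self.sequence):
            expected = self.sequence[self.step]
            for name, conf in detections:
                # 0.5 is an assumed confidence threshold, not the paper's value
                if name == expected and conf >= 0.5:
                    self.step += 1
                    break
        return self.sequence[self.step] if self.step < len(self.sequence) else None

# Example: one frame in which the detector sees the first state completed.
ctrl = SequenceController(ASSEMBLY_SEQUENCE)
next_task = ctrl.update([("base_placed", 0.91)])  # -> "gear_mounted"
```

In this sketch, unrelated or low-confidence detections leave the state unchanged, which mirrors the abstract's point that the operator need not perform purpose-made gestures: the controller reacts only to the observed workspace state.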
Original language: English
Article number: 553
Number of pages: 18
Journal: Sensors
Volume: 23
Issue number: 1
DOIs
Publication status: Published - 3 Jan 2023

Keywords

  • visual assembly task recognition
  • human–robot collaborative assembly
  • online class detection
  • deep learning

