Human-Aware Robot Collaborative Task Planning Using Artificial Potential Field and DQN Reinforcement Learning

Publisher:
IEEE (Institute of Electrical and Electronics Engineers)
Publication Type:
Journal Article
Citation:
IEEE ACCESS, 2025, 13, pp. 140889-140899
Issue Date:
2025
This paper presents a novel approach to robot-robot-human interaction in a shared workspace for collaborative tasks, using multi-modal communication that includes hand gestures, voice commands, end-effector gestures, and marker tracking. The system consists of a human operator working alongside a task robot (UR5) and a helper robot (OpenManipulatorX) to perform assembly and disassembly tasks. A Deep Q-Network (DQN) reinforcement learning model is trained to perform the goal-reaching task while avoiding obstacles to ensure safety. The DQN algorithm uses the end-effector position and its positions relative to the goal and obstacles to learn a policy that guides the robot arm safely. Four training models are then created, and their ability to avoid obstacles and reach the goal is compared with the point-to-point Bezier interpolation path-planning method in scenarios with varying obstacle height, size, and number. The proposed system was simulated and then experimentally validated. Experimental results show that the DQN-trained model outperformed Bezier interpolation in reaching the final goal position, achieving an accuracy of 74 mm while avoiding obstacles in a shared environment. Among the trained models, the model with a larger action space and a reduced observation space gave the best accuracy and goal-completion rate. Experimental data also show that the Improved Artificial Potential Field (IAPF) reached the goal in a median time of 4.7 s, compared with 7.62 s for the Goal-Directed Approach (GDA) and 6.22 s for Rapidly-exploring Random Tree Star (RRT*) across different scenarios.
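To illustrate the potential-field idea behind the IAPF planner, the sketch below implements one step of the classic artificial potential field method (not the paper's exact IAPF formulation, whose improvements are not detailed in the abstract): an attractive gradient pulls the end-effector toward the goal while a repulsive gradient pushes it away from obstacles inside an influence radius. All gain values and radii are illustrative assumptions.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=0.3, step=0.05):
    """One classic APF step: move `pos` along the negated potential gradient.

    k_att/k_rep are illustrative attractive/repulsive gains, rho0 the
    obstacle influence radius, and step the fixed motion increment (m).
    """
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)  # attractive term: linear pull toward goal
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        rho = np.linalg.norm(diff)
        if 0 < rho < rho0:  # repulsion only inside the influence radius
            force += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (diff / rho)
    norm = np.linalg.norm(force)
    return pos if norm == 0 else pos + step * force / norm

# Example: step toward a goal at x = 1 m while an obstacle sits off-axis.
p = apf_step([0, 0, 0], [1, 0, 0], obstacles=[[0.2, 0.2, 0]])
```

In this example the resulting step keeps a positive x-component (progress toward the goal) and a negative y-component (deflection away from the obstacle), which is the qualitative behavior the planner relies on.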