Sequential Motion Primitives Recognition of Robotic Arm Task via Human Demonstration using Hierarchical BiLSTM Classifier

Publisher:
Institute of Electrical and Electronics Engineers (IEEE)
Publication Type:
Journal Article
Citation:
IEEE Robotics and Automation Letters, 2020, 6, (2), pp. 502-509
Issue Date:
2020-01-01
Abstract:
Learning from demonstration (LfD) is an intuitive teaching technique that spares the operator extensive programming. In recent LfD research, machine vision is commonly used to capture human-robot interaction; however, vision is unreliable during machining processes. This paper proposes a novel, intuitive high-level kinesthetic teaching technique that reconstructs the motion information recorded while a human guides a robotic arm. A hierarchical BiLSTM-based machine learning algorithm is proposed to recognize and segment motion primitives according to the therblig definitions. A hybrid sensing interface records and extracts the motion features: the velocity profile, force/torque, and gripper state. The proposed classifier then maps these motion features into the target motion primitives. Experimental results and comparisons with a state-of-the-art algorithm show that the proposed method correctly and efficiently synthesizes the recorded motion features into a motion primitive sequence. Recognition results on real-world tasks show that the proposed algorithm can reconstruct a human-guided task and convert it into robot commands for a KUKA robot. The reconstructed trajectories show that a real-world task can be represented while maintaining an accuracy within 2.37 mm.
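The abstract describes classifying windows of sensed motion features into motion primitives and then synthesizing them into a primitive sequence. The final merging step can be sketched as follows; this is a minimal illustration, not the paper's implementation, and the therblig label names, window indexing, and function name are assumptions for the example:

```python
from itertools import groupby

def merge_labels(window_labels):
    """Collapse consecutive identical per-window primitive labels
    into (primitive, start_window, end_window) segments, yielding
    the motion primitive sequence of the demonstrated task."""
    segments = []
    idx = 0
    for label, group in groupby(window_labels):
        n = len(list(group))  # number of consecutive windows with this label
        segments.append((label, idx, idx + n - 1))
        idx += n
    return segments

# Hypothetical per-window classifier output for a pick-and-place motion
# (labels are illustrative therblig-style names, not from the paper):
labels = ["reach", "reach", "grasp", "move", "move", "move", "release"]
print(merge_labels(labels))
# [('reach', 0, 1), ('grasp', 2, 2), ('move', 3, 5), ('release', 6, 6)]
```

Each merged segment can then be mapped to a robot command, which is how a recognized primitive sequence could be replayed on a manipulator such as the KUKA arm mentioned in the abstract.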