A 3D Vector Field and Gaze Data Fusion Framework for Hand Motion Intention Prediction in Human-Robot Collaboration

Publisher:
IEEE
Publication Type:
Conference Proceeding
Citation:
2024 IEEE International Conference on Robotics and Automation (ICRA), 2024, pp. 5637-5643
Issue Date:
2024-01-01
Abstract:
In human-robot collaboration (HRC) settings, hand motion intention prediction (HMIP) plays a pivotal role in ensuring prompt decision making, safety, and an intuitive collaboration experience. Precise and robust HMIP with low computational resources remains a challenge due to the stochastic nature of hand motion and the diversity of HRC tasks. This paper proposes a framework that combines hand trajectories and gaze data to foster robust, real-time HMIP with minimal to no training. A novel 3D vector field method is introduced for hand trajectory representation, leveraging minimum-jerk trajectory predictions to discern potential hand motion endpoints. This is statistically combined with gaze fixation data using a weighted Naive Bayes classifier (NBC). Acknowledging the potential variances in saccadic eye motion due to factors like fatigue or inattentiveness, we incorporate stationary gaze entropy to gauge visual concentration, thereby adjusting the contribution of gaze fixation to the HMIP. Empirical experiments substantiate that the proposed framework robustly predicts the intended endpoints of hand motion before at least 50% of the trajectory is completed. It also successfully exploits gaze fixations when the human operator is attentive and mitigates their influence when the operator loses focus. A real-time implementation in a construction HRC scenario (collaborative tiling) showcases the intuitive nature and potential efficiency gains of introducing the proposed HMIP into HRC contexts. An open-source implementation of the framework is available at https://github.com/maleenj/hmip_ros.git.
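The fusion scheme the abstract describes — scoring candidate endpoints by how well a minimum-jerk trajectory toward each one explains the observed partial hand path, then combining that with gaze evidence whose weight decays as stationary gaze entropy rises — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the Gaussian residual model, the linear entropy-to-weight mapping, and all function names (`min_jerk`, `hand_likelihoods`, `fuse`) and parameters (`sigma`, `max_entropy`) are assumptions for illustration only.

```python
import numpy as np

def min_jerk(x0, xf, T, t):
    # Minimum-jerk position profile from x0 to xf over total duration T,
    # evaluated at time t (the standard 10t^3 - 15t^4 + 6t^5 polynomial).
    tau = np.clip(t / T, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s

def hand_likelihoods(traj, t_obs, goals, T, sigma=0.05):
    # Score each candidate goal by how well a minimum-jerk trajectory
    # toward it explains the observed partial hand path
    # (Gaussian likelihood of the mean residual; sigma is assumed).
    x0 = traj[0]
    scores = []
    for g in goals:
        pred = np.array([min_jerk(x0, g, T, t) for t in t_obs])
        err = np.linalg.norm(pred - traj, axis=1).mean()
        scores.append(np.exp(-(err / sigma) ** 2))
    s = np.array(scores)
    return s / s.sum()

def fuse(p_hand, p_gaze, gaze_entropy, max_entropy):
    # Weighted naive-Bayes-style fusion: the gaze term is raised to a
    # weight w that shrinks as stationary gaze entropy rises, reducing
    # its influence when the operator's visual focus degrades.
    w = 1.0 - min(gaze_entropy / max_entropy, 1.0)
    post = p_hand * p_gaze ** w
    return post / post.sum()

# Hypothetical usage: two candidate goals, half the motion observed.
goals = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
t_obs = np.linspace(0.0, 0.5, 10)   # first 50% of a 1 s reach
traj = np.array([min_jerk(np.zeros(2), goals[0], 1.0, t) for t in t_obs])
p_hand = hand_likelihoods(traj, t_obs, goals, T=1.0)
posterior = fuse(p_hand, np.array([0.5, 0.5]), gaze_entropy=0.9, max_entropy=1.0)
```

With a distracted operator (high entropy), the near-uniform gaze term contributes little and the posterior is driven by the hand-trajectory evidence, matching the mitigation behaviour the paper reports.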