New CNN and hybrid CNN-LSTM models for learning object manipulation of humanoid robots from demonstration
Because the environments in which humans live are complex and uncontrolled, object manipulation is regarded as one of the most challenging tasks for humanoid robots. Learning a manipulation skill from human demonstration (Learning from Demonstration, LfD) is a popular approach in the artificial intelligence and robotics communities. This paper introduces a deep-learning-based teleoperation system for humanoid robots that imitates the human operator's object manipulation behavior. One of the fundamental problems in LfD is approximating, with high accuracy, the robot trajectories obtained from human demonstrations. The work introduces novel models for object manipulation with humanoid robots using LfD: models based on Convolutional Neural Networks (CNNs), hybrid models combining CNNs with Long Short-Term Memory (LSTM) networks, and their scaled variants. In the proposed LfD system, six models are employed to estimate the shoulder roll position of the humanoid robot. Data are first collected by teleoperating a real Robotis-OP3 humanoid robot, and the models are trained. Trajectory estimation is then carried out autonomously on the humanoid robot by the trained CNN and CNN-LSTM models. All trajectories of the joint positions are finally generated from the model outputs. The six models are compared with each other and with the real trajectories in terms of training and validation loss, number of parameters, and training and testing time. Extensive experimental results show that the proposed CNN models learn the joint positions well, and that the hybrid CNN-LSTM models in the proposed teleoperation system exhibit more accurate and stable results.
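To illustrate the hybrid architecture described above, the following is a minimal NumPy sketch of a CNN-LSTM pipeline for single-joint trajectory estimation: a 1-D convolution extracts local features from a short window of demonstration samples, an LSTM summarizes them over time, and a linear head emits one joint position. This is not the authors' implementation; all dimensions, weights, and the single-output head are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    # Valid 1-D convolution over time with ReLU.
    # x: (T, C_in), w: (K, C_in, C_out), b: (C_out,)
    K, C_in, C_out = w.shape
    T_out = x.shape[0] - K + 1
    out = np.empty((T_out, C_out))
    for t in range(T_out):
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_last_hidden(x, Wx, Wh, b, H):
    # Single-layer LSTM over the feature sequence; gates ordered [i, f, g, o].
    # Returns the final hidden state, used as the sequence summary.
    h = np.zeros(H)
    c = np.zeros(H)
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    return h

# Hypothetical dimensions: a window of 20 samples with 4 input channels
# (e.g. operator joint readings); all sizes here are assumptions.
T, C_in, C_out, K, H = 20, 4, 8, 3, 16
x = rng.standard_normal((T, C_in))

feat = conv1d(x, rng.standard_normal((K, C_in, C_out)) * 0.1, np.zeros(C_out))
h = lstm_last_hidden(feat,
                     rng.standard_normal((C_out, 4 * H)) * 0.1,
                     rng.standard_normal((H, 4 * H)) * 0.1,
                     np.zeros(4 * H), H)

# Linear head: one scalar output, e.g. the estimated shoulder-roll position.
angle = float(h @ (rng.standard_normal((H, 1)) * 0.1))
print(feat.shape, h.shape)
```

In a trained system the random weights would of course be learned from the teleoperation data; the sketch only shows how the CNN features and the LSTM summary compose into a single joint-position estimate.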
Yasar University Institutional Repository is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 Unported License.