Assistant robots are designed to perform specific tasks for the user, but their performance is rarely optimal, so they must adapt to user preferences or new task requirements. In previous work, the potential of an interactive learning framework based on user intervention and reinforcement learning (RL) was assessed. The framework allows the user to correct an ill-fitting segment of the robot's trajectory by using hand movements to guide the robot along a corrective path. So far, only the usability of the framework had been evaluated, through experiments with users. In the current work, the framework is described in detail and its ability to learn from a set of sample trajectories using an RL algorithm is analyzed. To evaluate the learning performance, three versions of the framework are proposed that differ in how the sample trajectories are obtained: human-guided learning, autonomous learning, and combined human-guided and autonomous learning. The results show that the combination of human-guided and autonomous learning achieved the best performance; although it needed more sample trajectories than human-guided learning alone, it also required less user involvement. Autonomous learning alone obtained the lowest reward value and needed the highest number of sample trajectories.
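The three sample-acquisition modes compared in the abstract can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it assumes a simple episodic, PI2/REPS-style reward-weighted update over a parameterized trajectory, where "human-guided" samples are modeled as noisy demonstrations near a hypothetical desired path and "autonomous" samples as exploration rollouts around the current policy mean. The target trajectory, noise levels, and reward function are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 20
# Hypothetical desired trajectory the user would guide the robot toward.
target = np.sin(np.linspace(0.0, np.pi, T))

def reward(traj):
    # Higher is better: negative squared tracking error to the desired path.
    return -np.sum((traj - target) ** 2)

def autonomous_samples(mean, n, noise=0.3):
    # Exploration rollouts around the current policy mean (no user input).
    return [mean + rng.normal(0.0, noise, T) for _ in range(n)]

def human_guided_samples(n, noise=0.05):
    # User corrections, modeled as noisy demonstrations near the desired path.
    return [target + rng.normal(0.0, noise, T) for _ in range(n)]

def update(samples, beta=1.0):
    # Reward-weighted averaging with exponentiated, normalized weights.
    rs = np.array([reward(s) for s in samples])
    w = np.exp(beta * (rs - rs.max()))
    w /= w.sum()
    return sum(wi * s for wi, s in zip(w, samples))

# Combined mode: mix a few user-guided samples with autonomous rollouts.
mean = np.zeros(T)
for _ in range(10):
    samples = autonomous_samples(mean, 8) + human_guided_samples(2)
    mean = update(samples)
```

In this sketch, dropping `human_guided_samples` recovers the purely autonomous mode (slower improvement from a poor start), while using only demonstrations mirrors human-guided learning at the cost of more user involvement, matching the trade-off reported in the abstract.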


Keywords

learning (artificial intelligence), manipulators.

Author keywords

Human-Robot Interaction, Assistive Robots

Scientific reference

A. Jevtić, A. Colomé, G. Alenyà and C. Torras. Robot motion adaptation through user intervention and reinforcement learning. Pattern Recognition Letters, 2017, to appear.