Robot Task Learning from Human Demonstration

University dissertation from Stockholm: KTH

Abstract: Today, most robots used in industry are preprogrammed and require a well-defined and controlled environment. Reprogramming such robots is often a costly process requiring an expert. By enabling robots to learn tasks from human demonstration, robot installation and task reprogramming are simplified. In a longer time perspective, the vision is that robots will move out of factories and into our homes and offices. Robots should be able to learn how to set a table or how to fill the dishwasher. Clearly, robot learning mechanisms are required to enable robots to adapt and operate in a dynamic environment, in contrast to the well-defined factory assembly line.

This thesis presents contributions in the field of robot task learning. A distinction is made between direct and indirect learning. Using direct learning, the robot learns tasks while being directly controlled by a human, for example in a teleoperative setting. Indirect learning, in contrast, allows the robot to learn tasks by observing a human performing them. A challenging and realistic assumption that is decisive for the indirect learning approach is that the task-relevant objects are not necessarily at the same location at execution time as when the learning took place. Thus, it is not sufficient to learn movement trajectories and absolute coordinates; different methods are required for a robot that is to learn tasks in a dynamic home or office environment. This thesis presents contributions to several of these enabling technologies. Object detection and recognition are used together with pose estimation in a Programming by Demonstration scenario. The vision system is integrated with a localization module, which enables the robot to learn mobile tasks. The robot is able to recognize human grasp types, map human grasps to its own hand, and evaluate suitable grasps before grasping an object. The robot can learn tasks from a single demonstration, but it also has the ability to adapt and refine its knowledge as more demonstrations are given. Here, the ability to generalize over multiple demonstrations is important, and we investigate a method for automatically identifying the underlying constraints of the tasks.

The majority of the methods have been implemented on a real, mobile robot featuring a camera, an arm for manipulation, and a parallel-jaw gripper. The experiments were conducted in an everyday environment with real, textured objects of various shapes, sizes and colors.
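
The abstract's point that absolute coordinates are insufficient, and that generalizing over several demonstrations can reveal the underlying task constraints, can be illustrated with a small sketch. The following Python snippet is a minimal illustration under simplifying assumptions, not the method developed in the thesis: gripper positions are expressed relative to the task object (translation only, a deliberate simplification), and the variance across demonstrations is used as a crude indicator of how constrained each dimension is at each point in the task (low variance suggests a constraint imposed by the task, high variance suggests freedom). All function names and the toy data are hypothetical.

import numpy as np

def object_relative(trajectory, object_position):
    # Express a gripper trajectory (N x 3 positions) relative to the task
    # object's position, so that demonstrations with the object at different
    # absolute locations become comparable. Translation only; a real system
    # would use full poses (rotation included).
    return trajectory - object_position

def constraint_weights(demos, eps=1e-6):
    # Given aligned, object-relative demonstrations (each N x 3), estimate
    # how constrained each dimension is at each timestep: low variance across
    # demonstrations gives a weight near 1 (constrained), high variance gives
    # a weight near 0 (free).
    stacked = np.stack(demos)          # (num_demos, N, 3)
    var = stacked.var(axis=0)          # variance across demonstrations, (N, 3)
    return 1.0 / (1.0 + var / (var.mean() + eps))

# Toy example: three demonstrations of reaching toward an object that sits
# at a different absolute position in each demonstration.
rng = np.random.default_rng(0)
object_positions = [np.array([0.2, 0.0, 0.0]),
                    np.array([0.5, 0.3, 0.0]),
                    np.array([0.1, 0.6, 0.0])]
demos = []
for pos in object_positions:
    t = np.linspace(0.0, 1.0, 20)[:, None]
    reach = pos * t                                    # straight-line reach
    noise = rng.normal(scale=0.01, size=reach.shape)   # demonstration noise
    demos.append(object_relative(reach + noise, pos))

weights = constraint_weights(demos)
print(weights[0])    # start of the motion: weights are low (unconstrained)
print(weights[-1])   # near the object: weights approach 1 (constrained)

In this toy data the end point of every demonstration coincides with the object, so once the trajectories are expressed object-relatively the final timesteps agree across demonstrations and are flagged as constrained, while the varying start positions are flagged as free. Had the trajectories been kept in absolute coordinates, the same comparison would have found almost nothing in common, which is the motivation the abstract gives for going beyond trajectories and absolute positions.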
