Situated robot learning for multi-modal instruction and imitation of grasping
Authors
Abstract
A key prerequisite to make user instruction of work tasks by interactive demonstration effective and convenient is situated multi-modal interaction aiming at an enhancement of robot learning beyond simple low-level skill acquisition. We report the status of the Bielefeld GRAVIS-robot system that combines visual attention and gestural instruction with an intelligent interface for speech recognition and linguistic interpretation to allow multi-modal task-oriented instructions. With respect to this platform, we discuss the essential role of learning for robust functioning of the robot and sketch the concept of an integrated architecture for situated learning on the system level. It has the long-term goal to demonstrate speech-supported imitation learning of robot actions. We describe the current state of its realization to enable imitation of human hand postures for flexible grasping and give quantitative results for grasping a broad range of everyday objects. © 2004 Elsevier B.V. All rights reserved.
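The abstract describes the instruction pipeline only at the system level. As a purely illustrative sketch (not the GRAVIS implementation), the Python fragment below shows one way a spoken command, a pointing gesture, and an observed hand posture could be fused into a single grasp request; every class, field, and helper name here is a hypothetical stand-in.

```python
# Illustrative sketch only, not the GRAVIS code. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class SpeechCommand:
    verb: str           # e.g. "take"
    object_label: str   # e.g. "cup"

@dataclass
class PointingGesture:
    target_xyz: tuple   # 3-D point indicated by the user's pointing gesture

@dataclass
class HandPosture:
    joint_angles: list  # observed human finger joint angles to be imitated

def fuse_instruction(speech, gesture, posture, scene_objects):
    """Resolve the spoken object label against the object nearest to the
    pointed-at location and attach the demonstrated hand posture."""
    candidates = [o for o in scene_objects if o["label"] == speech.object_label]
    if not candidates:
        return None  # the label and gesture could not be grounded in the scene
    target = min(
        candidates,
        key=lambda o: sum((a - b) ** 2 for a, b in zip(o["xyz"], gesture.target_xyz)),
    )
    return {"action": speech.verb, "object": target, "hand_posture": posture.joint_angles}

# Toy usage: two objects on a table, the user says "take the cup" and points near it.
scene = [{"label": "cup", "xyz": (0.40, 0.10, 0.0)},
         {"label": "ball", "xyz": (0.10, 0.50, 0.0)}]
request = fuse_instruction(SpeechCommand("take", "cup"),
                           PointingGesture((0.38, 0.12, 0.0)),
                           HandPosture([0.2] * 20),
                           scene)
print(request)
```

The point of the sketch is only that the modalities constrain one another: speech supplies the action and object category, the gesture disambiguates which instance is meant, and the demonstrated hand posture parameterizes the grasp.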
Similar articles
Learning issues in a multi-modal robot-instruction scenario
One of the challenges for the realization of future intelligent robots is to design architectures which make user instruction of work tasks by interactive demonstration effective and convenient. A key prerequisite for enhancement of robot learning beyond the level of low-level skill acquisition is situated multi-modal communication. Currently, most existing robot platforms still have to advance...
IntentionGAN: Multi-Modal Imitation Learning from Unstructured Demonstrations
Traditionally, imitation learning has focused on using isolated demonstrations of a particular skill [3]. The demonstration is usually provided in the form of kinesthetic teaching, which requires the user to spend sufficient time to provide the right training data. This constrained setup for imitation learning is difficult to scale to real world scenarios, where robots have to be able to execut...
Multi-Modal Imitation Learning from Unstructured Demonstrations using Generative Adversarial Nets
Imitation learning has traditionally been applied to learn a single task from demonstrations thereof. The requirement of structured and isolated demonstrations limits the scalability of imitation learning approaches as they are difficult to apply to real-world scenarios, where robots have to be able to execute a multitude of tasks. In this paper, we propose a multi-modal imitation learning fram...
Towards Imitation Learning of Grasping Movements by an Autonomous Robot
Imitation learning holds the promise of robots which need not be programmed but instead can learn by observing a teacher. We present recent efforts being made at our laboratory towards endowing a robot with the capability of learning to imitate human hand gestures. In particular, we are interested in grasping movements. The aim is a robot that learns, e.g., to pick up a cup at its handle by imi...
Human-Inspired Grasping of Novel Objects through Imitation Learning
A robotic algorithm capable of grasping novel objects is presented. With a single stereo image as input, a supervised machine learning framework is developed that is both fast and accurate. The algorithm is trained by a human in a learn-by-demonstration procedure where the robot is shown a set of valid end-effector rotations to grasp various objects. Learning is then achieved through a multi-cl...
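To make the truncated description above concrete, here is a minimal, hedged sketch of the general idea of multi-class grasp-rotation classification; the feature dimensionality, the eight rotation bins, the random stand-in data, and the SVM choice are assumptions for illustration, not the authors' framework.

```python
# Hedged illustration of multi-class grasp-rotation classification.
# The data and model choices below are stand-ins, not the paper's setup.
import numpy as np
from sklearn.svm import SVC

X_train = np.random.rand(200, 64)       # stand-in for features from stereo images
y_train = np.random.randint(0, 8, 200)  # labels: 8 demonstrated end-effector rotations

clf = SVC(kernel="rbf")                 # any multi-class classifier would serve
clf.fit(X_train, y_train)

x_new = np.random.rand(1, 64)           # features computed for a novel object
print("selected rotation bin:", clf.predict(x_new)[0])
```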
Journal: Robotics and Autonomous Systems
Volume: 47, Issue: -
Pages: -
Year of publication: 2004