Robot hand-eye coordination using neural networks
Authors
Abstract
This paper focuses on static hand-eye coordination. The key issue that will be addressed is the construction of a controller that eliminates the need for calibration. Instead, the system should be self-learning and must be able to adapt itself to changes in the environment. In this application, only positional information in the system is used; hence the qualifier 'static' above. Three coordinate domains are used to describe the system: the Cartesian world domain with elements x = (x, y, z), describing the position of the target object and the robot end-effector in world coordinates; the vision domain with elements v, describing the observation of the target object relative to the camera position; and the robot domain with elements θ, known as the joint values of the robot, describing the position of the robot end-effector relative to the robot base.

The task that is set out to be solved is the following. A robot manipulator has to be positioned directly above a pre-specified target, such that the target can be grasped. The target is specified in terms of visual parameters. Only the (x, y, z) position of the end-effector relative to the target is taken into account; this suffices for many pick-and-place problems encountered in industry. (In a number of cases the rotation of the hand is also of importance, but this rotation can be executed separately from the 3D positioning problem.) Thus the remaining problem has 3 degrees of freedom (DoF). No explicit model of the robot is needed; only some basic assumptions (explained below) are incorporated. It is known that the robot is controlled by specifying changes Δθ of its joint values, and that the 'current' joint values θ can be measured. The Δθ are delta values of θ; i.e., each Δθ_i is the difference between the new and the current value of joint value θ_i. Secondly, of the visual system it is only known that it provides positional information v about the observed object.

Given the visual data v and the robot positional data θ, a model-free adaptive controller learns to generate robot commands Δθ which position the end-effector directly above the target object. We are interested in the relation between the visual features (i.e., positions of point features) v on the one hand, and joint positions θ on the other. Traditional model-based hand-eye coordination tends to consider this as two separate problems: first, a translation from the visual to the Cartesian domain, followed by a translation …
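To make the controller's interface concrete, the sketch below implements the mapping (v, θ) → Δθ as a small two-layer network in NumPy that is corrected online from a training signal. It is only an illustration of the idea of a model-free, self-learning controller: the class name, network size, learning rate, and the squared-error update against a 'better' move are all assumptions for illustration, not the scheme used in the paper.

```python
# Illustrative sketch only: a tiny two-layer network mapping
# (visual features v, joint values theta) -> joint increment dtheta.
# Names, sizes, and the update rule are assumptions, not the paper's method.
import numpy as np

class HandEyeController:
    """Maps (visual features v, joint values theta) to a joint increment dtheta."""

    def __init__(self, n_visual, n_joints, n_hidden=32, lr=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        n_in = n_visual + n_joints
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_joints, n_hidden))
        self.b2 = np.zeros(n_joints)
        self.lr = lr

    def predict(self, v, theta):
        # Forward pass; cache activations for the later update step.
        self._x = np.concatenate([v, theta])
        self._h = np.tanh(self.W1 @ self._x + self.b1)
        return self.W2 @ self._h + self.b2          # proposed delta-theta

    def update(self, target_dtheta):
        # One gradient step on the squared error against a corrected move,
        # e.g. one derived from the residual visual error after executing it.
        err = (self.W2 @ self._h + self.b2) - target_dtheta
        dW2 = np.outer(err, self._h)
        db2 = err
        dh = (self.W2.T @ err) * (1.0 - self._h ** 2)
        dW1 = np.outer(dh, self._x)
        db1 = dh
        self.W2 -= self.lr * dW2; self.b2 -= self.lr * db2
        self.W1 -= self.lr * dW1; self.b1 -= self.lr * db1
```

In a closed loop, such a controller would repeatedly predict Δθ, execute the move, re-observe the target, and derive the training target from the residual visual error; how that correction signal is obtained is exactly where different self-learning schemes differ.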
Similar papers
Learning Hand-Eye Coordination for Robotic Grasping with Large-Scale Data Collection
We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot po...
Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection
We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pos...
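As a rough illustration of the prediction problem sketched in this abstract, the following snippet wires a monocular image and a candidate task-space motion into a single grasp-success probability. The architecture, input size, and fusion scheme are illustrative assumptions written in PyTorch; the network trained in the paper is far larger and uses large-scale real-robot data.

```python
# Illustrative sketch only: a small CNN that scores
# P(successful grasp | monocular image, candidate gripper motion).
import torch
import torch.nn as nn

class GraspSuccessNet(nn.Module):
    def __init__(self, motion_dim=3):
        super().__init__()
        self.conv = nn.Sequential(              # image encoder
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(              # fuse image code with motion
            nn.Linear(32 + motion_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, image, motion):
        z = self.conv(image).flatten(1)         # (B, 32) image code
        logits = self.head(torch.cat([z, motion], dim=1))
        return torch.sigmoid(logits)            # grasp-success probability

# Example: a batch of 64x64 RGB images and 3-D task-space motion candidates.
net = GraspSuccessNet()
p = net(torch.randn(2, 3, 64, 64), torch.randn(2, 3))
```

A grasp controller could then score many candidate motions with such a network and execute the most promising one.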
Enhanced Robotic Hand-eye Coordination inspired from Human-like Behavioral Patterns
Robotic hand-eye coordination is recognized as an important skill to deal with complex real environments. Conventional robotic hand-eye coordination methods merely transfer stimulus signals from robotic visual space to hand actuator space. This paper introduces a reverse method: Build another channel that transfers stimulus signals from robotic hand space to visual space. Based on the reverse c...
The Effect of Different Break Activities on Eye-Hand Coordination in Female Students
People face breaks in their daily tasks that affect their daily life. The present study was designed to evaluate the effect of different break activities on eye-hand coordination in female students. In the current experimental study, conducted with a repeated-measures design, 36 high-school female students aged 13-15 years were conveniently selected. In order to evaluate participa...
Gbf Network Architectures for Robot Vision
A versatile robot manipulator is based on techniques of computer vision and neural network learning. For grasping objects, four principal tasks have to be done in a cycle: detect the desired object and the grasping fingers in the images, evaluate the spatial relationship with respect to grasping stability, choose a more stable grasping pose (if possible), and move the manipulator to it. In this work we focu...
An Infant Development-inspired Approach to Robot Hand-eye Coordination
This paper presents a novel developmental learning approach for hand-eye coordination in an autonomous robotic system. Robotic hand-eye coordination plays an important role in dealing with real-time environments. Under this approach, infant developmental patterns are introduced to build our robot’s learning system. The method works by first constructing a brain-like computational structure to con...