Lighting-invariant Visual Teach and Repeat Using Appearance-based Lidar

Authors

  • Colin McManus
  • Paul Timothy Furgale
  • Braden Stenning
  • Tim D. Barfoot
Affiliations

  • Colin McManus: Mobile Robotics Group, University of Oxford, Oxford, UK (e-mail: [email protected])
  • Paul Furgale: Autonomous Systems Lab, ETH Zürich, Zürich, Switzerland (e-mail: [email protected])
  • Braden Stenning and Timothy D. Barfoot: Autonomous Space Robotics Lab, University of Toronto Institute for Aerospace Studies, Toronto, Canada (e-mail: [email protected], [email protected])


Related Articles

UAV Visual Teach and Repeat Using Only Semantic Object Features

We demonstrate the use of semantic object detections as robust features for Visual Teach and Repeat (VTR). Recent CNN-based object detectors are able to reliably detect objects of tens or hundreds of categories in video at frame rates. We show that such detections are repeatable enough to use as landmarks for VTR, without any low-level image features. Since object detections are highly invarian...
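As an illustrative aside, one way to use detections as landmarks is to associate them across the teach and repeat passes by class label and image position rather than by low-level descriptors. The Python sketch below is a minimal, assumed formulation: the Detection fields, the greedy nearest-neighbour rule, and the pixel gate are illustrative choices, not the authors' method.

```python
# Minimal sketch: treating semantic object detections as sparse landmarks,
# associated by class label and image position rather than by low-level
# descriptors. Field names and the gating threshold are illustrative
# assumptions, not the paper's exact formulation.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str  # e.g. "chair", "door"
    u: float    # bounding-box centre, x (pixels)
    v: float    # bounding-box centre, y (pixels)

def associate(teach_dets, repeat_dets, gate_px=80.0):
    """Greedy nearest-neighbour association of detections sharing a class label."""
    matches, used = [], set()
    for t in teach_dets:
        best, best_d = None, gate_px
        for j, r in enumerate(repeat_dets):
            if j in used or r.label != t.label:
                continue
            d = ((t.u - r.u) ** 2 + (t.v - r.v) ** 2) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            matches.append((t, repeat_dets[best]))
    return matches
```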


Into Darkness: Visual Navigation Based on a Lidar-Intensity-Image Pipeline

Visual navigation of mobile robots has become a core capability that enables many interesting applications, from planetary exploration to self-driving cars. While systems built on passive cameras have been shown to be robust in well-lit scenes, they cannot handle the range of conditions associated with a full diurnal cycle. Lidar, which is fairly invariant to ambient lighting conditions, offers o...


Towards lighting-invariant visual navigation: An appearance-based approach using scanning laser-rangefinders

In an effort to facilitate lighting-invariant exploration, this paper presents an appearance-based approach using 3D scanning laser-rangefinders for two core visual navigation techniques: visual odometry (VO) and visual teach and repeat (VT&R). The key to our method is to convert raw laser intensity data into greyscale camera-like images, in order to apply sparse, appearance-based techniques tra...
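To illustrate the intensity-to-image idea described above, the following Python sketch bins lidar returns into a 2D grid indexed by azimuth and elevation and stretches the intensities to 8-bit greyscale. The array names, grid resolution, and percentile normalisation are assumptions made for illustration, not the exact pipeline from the paper.

```python
# Minimal sketch: projecting lidar intensity returns into a greyscale,
# camera-like image so that standard sparse feature pipelines can run on it.
import numpy as np

def intensity_image(azimuth, elevation, intensity, width=640, height=480):
    """Bin lidar returns into a camera-like greyscale image indexed by angle."""
    # Map scan angles to pixel coordinates with a simple linear rescaling.
    az_span = azimuth.max() - azimuth.min() + 1e-9
    el_span = elevation.max() - elevation.min() + 1e-9
    u = ((azimuth - azimuth.min()) / az_span * (width - 1)).astype(int)
    v = ((elevation.max() - elevation) / el_span * (height - 1)).astype(int)

    img = np.zeros((height, width), dtype=np.float32)
    img[v, u] = intensity  # if two returns share a pixel, the last one wins

    # Stretch the intensities to 8-bit greyscale so that standard sparse
    # feature detectors behave much as they would on a passive camera image.
    nonzero = img[img > 0]
    lo, hi = (np.percentile(nonzero, [1, 99]) if nonzero.size else (0.0, 1.0))
    img = np.clip((img - lo) / (hi - lo + 1e-9), 0.0, 1.0)
    return (img * 255).astype(np.uint8)
```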



Image features for visual teach-and-repeat navigation in changing environments

We present an evaluation of standard image features in the context of long-term visual teach-and-repeat navigation of mobile robots, where the environment exhibits significant changes in appearance caused by seasonal weather variations and daily illumination changes. We argue that for long-term autonomous navigation, the viewpoint-, scale-, and rotation-invariance of the standard feature extractors...
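To make the evaluation setting concrete, here is a minimal Python/OpenCV sketch that matches a "teach" image against a "repeat" image with one standard extractor; ORB and the parameter values are used purely as an illustrative stand-in for the detectors compared in the paper, and the cross-checked brute-force matcher is an assumed choice.

```python
# Minimal sketch: matching a "teach" image against a "repeat" image with a
# standard feature extractor (ORB here, purely as an illustrative stand-in).
import cv2

def match_teach_repeat(teach_img, repeat_img, max_matches=200):
    """Return putative keypoint correspondences between two greyscale images."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_t, des_t = orb.detectAndCompute(teach_img, None)
    kp_r, des_r = orb.detectAndCompute(repeat_img, None)
    if des_t is None or des_r is None:
        return []  # one of the images produced no features

    # Brute-force Hamming matching with cross-checking to prune one-way matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_t, des_r), key=lambda m: m.distance)
    return [(kp_t[m.queryIdx].pt, kp_r[m.trainIdx].pt) for m in matches[:max_matches]]
```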



Journal:
  • J. Field Robotics

Volume 30, Issue

Pages -

Publication date 2013