Self-Supervised Learning of Depth-Based Navigation Affordances from Haptic Cues

Jose Baleia, Pedro Santana, Jose Barata

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

5 Citations (Scopus)

Abstract

This paper presents a ground vehicle capable of exploiting haptic cues to learn navigation affordances from depth cues. A simple pan-tilt telescopic antenna and a Kinect sensor, both fitted to the robot's body frame, provide the required haptic and depth sensory feedback, respectively. With the antenna, the robot determines whether an object is traversable. The interaction outcome is then associated with the object's depth-based descriptor. Later on, the robot uses this acquired knowledge to predict whether a newly observed object is traversable just by inspecting its depth-based appearance. A set of field trials shows the robot's ability to progressively learn which elements of the environment are traversable.
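The learning loop described in the abstract, where haptic probe outcomes act as self-supervised labels for depth-based appearance, can be sketched roughly as follows. This is a minimal illustration under assumed simplifications: the descriptor (mean and variance of a depth patch), the 1-nearest-neighbour classifier, and all names (`TraversabilityLearner`, `add_interaction`, `predict`) are hypothetical choices for exposition, not the paper's actual method.

```python
# Sketch of a self-supervised traversability learner: the antenna's
# interaction outcome labels a depth-based descriptor, and later objects
# are classified from depth appearance alone. Descriptor and classifier
# choices here are illustrative assumptions.
import math


def depth_descriptor(depth_patch):
    """Toy descriptor: mean and variance of a patch of depth readings."""
    n = len(depth_patch)
    mean = sum(depth_patch) / n
    var = sum((d - mean) ** 2 for d in depth_patch) / n
    return (mean, var)


class TraversabilityLearner:
    """1-nearest-neighbour over (descriptor, traversable?) pairs."""

    def __init__(self):
        self.samples = []  # list of (descriptor, bool)

    def add_interaction(self, depth_patch, antenna_says_traversable):
        # The haptic probe outcome is the self-supervised label.
        self.samples.append(
            (depth_descriptor(depth_patch), antenna_says_traversable)
        )

    def predict(self, depth_patch):
        # Classify a newly observed object from depth appearance alone.
        query = depth_descriptor(depth_patch)
        nearest = min(self.samples, key=lambda s: math.dist(s[0], query))
        return nearest[1]


learner = TraversabilityLearner()
# Probing phase: antenna pushes through tall grass, is blocked by a rock.
learner.add_interaction([0.10, 0.12, 0.11], True)
learner.add_interaction([0.50, 0.52, 0.49], False)
# Prediction phase: a grass-like patch is judged traversable without contact.
print(learner.predict([0.11, 0.10, 0.12]))
```

As more probe interactions accumulate, predictions on visually similar objects improve, which is the progressive learning the field trials evaluate.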

Original language: English
Title of host publication: 2014 IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC
Editors: N. Lau, A. P. Moreira, R. Ventura, B. M. Faria
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 146-151
Number of pages: 6
Publication status: Published - 2014
Event: IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC) - Espinho, Portugal
Duration: 14 May 2014 - 15 May 2014

Conference

Conference: IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)
Country/Territory: Portugal
City: Espinho
Period: 14/05/14 - 15/05/14

Keywords

  • affordances
  • autonomous robots
  • depth sensing
  • robotic antenna
  • self-supervised learning
  • terrain assessment
