Self-Supervised Learning of Depth-Based Navigation Affordances from Haptic Cues

Jose Baleia, Pedro Santana, Jose Barata

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

5 Citations (Scopus)


This paper presents a ground vehicle capable of exploiting haptic cues to learn navigation affordances from depth cues. A simple pan-tilt telescopic antenna and a Kinect sensor, both fitted to the robot's body frame, provide the required haptic and depth sensory feedback, respectively. With the antenna, the robot determines whether an object is traversable. The interaction outcome is then associated with the object's depth-based descriptor. Later on, the robot uses this acquired knowledge to predict whether a newly observed object is traversable just by inspecting its depth-based appearance. A set of field trials shows the robot's ability to progressively learn which elements of the environment are traversable.
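The self-supervised loop described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the class name, descriptor format, and the k-nearest-neighbour classifier are assumptions; the paper only specifies that haptic outcomes from the antenna label depth-based descriptors, which are later used for prediction.

```python
import math

class AffordanceLearner:
    """Hypothetical sketch of the self-supervised scheme: haptic probes
    with the antenna label objects as traversable or not, and those
    labels supervise a classifier over depth-based descriptors."""

    def __init__(self, k=3):
        self.k = k
        self.memory = []  # stored (descriptor, traversable) pairs

    def record_interaction(self, descriptor, traversable):
        # The haptic outcome (e.g. the antenna sweeps through the object
        # unimpeded -> traversable) becomes the self-supervised label
        # attached to the object's depth-based descriptor.
        self.memory.append((tuple(descriptor), bool(traversable)))

    def predict(self, descriptor):
        # Predict traversability of a newly observed object from its
        # depth-based appearance alone, via a k-nearest-neighbour vote
        # over past haptic interactions (an assumed choice of classifier).
        if not self.memory:
            return None  # no interactions yet; must probe haptically
        dists = sorted(
            (math.dist(descriptor, d), label) for d, label in self.memory
        )
        votes = [label for _, label in dists[: self.k]]
        return votes.count(True) >= votes.count(False)
```

Usage, with made-up two-dimensional descriptors: after probing tall grass (traversable) and a rock (not), the robot can classify similar-looking objects without touching them:

```python
learner = AffordanceLearner(k=1)
learner.record_interaction([0.1, 0.2], True)   # grass: antenna passed through
learner.record_interaction([0.9, 0.8], False)  # rock: antenna was blocked
learner.predict([0.15, 0.25])                  # grass-like descriptor
```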

Original language: English
Title of host publication: 2014 IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC
Editors: N. Lau, A. P. Moreira, R. Ventura, B. M. Faria
Number of pages: 6
Publication status: Published - 2014
Event: IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC) - Espinho, Portugal
Duration: 14 May 2014 - 15 May 2014


Conference: IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)


Keywords:
  • affordances
  • autonomous robots
  • depth sensing
  • robotic antenna
  • self-supervised learning
  • terrain assessment
