Embodied categorisation for vision-guided mobile robots
- Authors
- Nick Barnes; Zhi-Qiang Liu
- Publisher
- Elsevier Science
- Year
- 2004
- Language
- English
- File size
- 757 KB
- Volume
- 37
- Category
- Article
- ISSN
- 0031-3203
Synopsis
This paper outlines a philosophical and psycho-physiological basis for embodied perception, and develops a framework for conceptual embodiment of vision-guided robots. We argue that categorisation is important in all stages of robot vision, and further that classical computer vision is unsuitable for this categorisation; through conceptual embodiment, however, active perception can be effective. We present a methodology for developing vision-guided robots that applies embodiment, explicitly and implicitly, in categorising visual data to facilitate efficient perception and action. Finally, we present systems developed using this methodology, and demonstrate that embodied categorisation can make algorithms more efficient and robust.
SIMILAR VOLUMES
## A Real-time Computer Vision Platform for Mobile Robot Applications
A portable platform is described that supports real-time computer vision applications for mobile robots. This platform includes conventional processors, an image processing front-end system, and a controller for a pan/tilt/vergen
This paper presents an approach for tracking multiple persons on a mobile robot with a combination of colour and thermal vision sensors, using several new techniques. First, an adaptive colour model is incorporated into the measurement model of the tracker. Second, a new approach for detecting occlu
A model-based method for indoor mobile robot localization is presented herein; this method relies on monocular vision and uses straight-line correspondences. A classical four-step approach has been adopted (i.e. image acquisition, image feature extraction, image and model feature matching, and camer