Guidance of a Mobile Robot with Environmental Map Using Omnidirectional Image Sensor COPIS (Special Issue on Image Processing and Understanding)
Abstract
We have proposed a new omnidirectional image sensor, COPIS (COnic Projection Image Sensor), for guiding the navigation of a mobile robot. Its key feature is passive sensing of an omnidirectional image of the environment in real time (at the frame rate of a TV camera) using a conic mirror. COPIS is a suitable sensor for visual navigation in real-world environments with moving objects. This paper describes a method for estimating the location and motion of the robot by detecting the azimuth of each object in the omnidirectional image. In this method, the azimuths are matched against a given environmental map. The robot can always estimate its own location and motion precisely, because COPIS observes a 360-degree view around the robot, even if not all edges are extracted correctly from the omnidirectional image. We also present a method to avoid collisions with unknown obstacles and to estimate their locations by detecting changes in their azimuths while the robot moves through the environment. Using the COPIS system, we performed several experiments in the real world.
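The localization idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: it assumes the azimuths have already been rotated into the world frame (i.e., the robot's heading is known) and that each measured azimuth has been associated with a landmark on the environmental map. Each azimuth then constrains the robot to a line through the corresponding landmark, and with two or more landmarks the position follows by least squares:

```python
import numpy as np

def locate_from_azimuths(landmarks, azimuths):
    """Estimate the robot position (x, y) from world-frame azimuths
    to known map landmarks.

    An azimuth t_i to landmark (x_i, y_i) means the vector from the
    robot to the landmark is parallel to (cos t_i, sin t_i), giving
    the linear constraint:
        sin(t_i) * x - cos(t_i) * y = sin(t_i) * x_i - cos(t_i) * y_i
    Stacking one row per landmark yields an overdetermined linear
    system solved by least squares.
    """
    landmarks = np.asarray(landmarks, dtype=float)
    t = np.asarray(azimuths, dtype=float)
    # One constraint row per observed landmark.
    A = np.column_stack([np.sin(t), -np.cos(t)])
    b = np.sin(t) * landmarks[:, 0] - np.cos(t) * landmarks[:, 1]
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Robot at (1, 2): a landmark at (4, 2) is seen at azimuth 0,
# and a landmark at (1, 5) at azimuth pi/2.
pos = locate_from_azimuths([[4.0, 2.0], [1.0, 5.0]], [0.0, np.pi / 2])
```

Because every constraint row contributes independently, extra landmarks simply over-determine the system; this reflects the abstract's point that a full 360-degree view keeps the estimate stable even when some edges are missed.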
- Paper published by the Institute of Electronics, Information and Communication Engineers (IEICE)
- 1993-04-25
Authors
-
Yagi Yasushi
Faculty of Engineering Science, Osaka University
-
Yachida Masahiko
Faculty of Engineering Science, Osaka University
-
Nishizawa Yoshimitsu
Faculty of Engineering Science, Osaka University
Related Papers
- Robust outdoor scene reconstruction by using wearable omnidirectional vision system (Multimedia and Virtual Environment Fundamentals)
- Robust outdoor scene reconstruction by using wearable omnidirectional vision system (Pattern Recognition and Media Understanding)
- Robust outdoor scene reconstruction by using wearable omnidirectional vision system (Computer Vision and Image Media)
- Construction Method of Efficient Database for Learning-Based Video Super-Resolution