Application: Vision Sensor Fusion
Introduction
Vision systems have become popular for remote sensing in geographically distributed environments because of the vast amount of information they provide. Mobile agent technology is a compelling approach to vision sensor fusion: it improves power efficiency by reducing communication requirements and enhances fusion processing by allowing in-situ integration of on-demand visual processing and analysis algorithms. Mobile agents can dynamically migrate between multiple vision sensors and combine the necessary sensor data in a manner specific to the requesting system.
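The minimal sketch below (in C, against the legacy OpenCV C API that the Ch OpenCV package exposes) illustrates the in-situ processing idea: instead of streaming a full image back to the requester, the agent's task code runs on the sensor node and returns only a compact summary. The `agent_task()` function, the `AgentResult` struct, the threshold value, and the file name are illustrative assumptions, not part of any Ch or mobile agent API.

```c
/* Hypothetical sketch of an agent task that runs in place on a vision
 * sensor node: the raw image never leaves the node, only the extracted
 * result does.  Names and constants are illustrative assumptions. */
#include <stdio.h>
#include <opencv/cv.h>
#include <opencv/highgui.h>

typedef struct {        /* compact result shipped back to the requester */
    double cx, cy;      /* centroid of the detected region (pixels)     */
    double area;        /* region area (pixels^2)                       */
} AgentResult;

/* Process one frame locally and return only the summary. */
int agent_task(const char *frame_file, AgentResult *out) {
    IplImage *img = cvLoadImage(frame_file, CV_LOAD_IMAGE_GRAYSCALE);
    if (!img) return -1;

    /* Segment bright regions; the threshold value is an assumption. */
    IplImage *bin = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    cvThreshold(img, bin, 200, 255, CV_THRESH_BINARY);

    /* Reduce the whole image to a few numbers via image moments. */
    CvMoments m;
    cvMoments(bin, &m, 1);
    out->area = cvGetSpatialMoment(&m, 0, 0);
    out->cx = out->area > 0 ? cvGetSpatialMoment(&m, 1, 0) / out->area : -1;
    out->cy = out->area > 0 ? cvGetSpatialMoment(&m, 0, 1) / out->area : -1;

    cvReleaseImage(&bin);
    cvReleaseImage(&img);
    return 0;
}

int main(void) {
    AgentResult r;
    if (agent_task("frame.jpg", &r) == 0)
        printf("centroid = (%.1f, %.1f), area = %.0f px\n", r.cx, r.cy, r.area);
    return 0;
}
```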
Required Packages for Executing Example Code:
To run the examples below, the Ch Robot, Ch OpenCV, and Ch GAUL packages must first be installed on the controlling computer.
Example 1: Part Localization in Assembly Automation
Running the code requires the installation of the Ch Robot package available here.
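Since the example code itself is not reproduced on this page, the sketch below outlines one common approach to vision-based part localization, assuming a fixed overhead camera: segment the part, find the largest contour, compute its centroid from image moments, and map the pixel coordinates into the robot workspace through a simple planar calibration. The calibration constants and the `move_robot_to()` stub are hypothetical; the Ch Robot package's own motion API is not shown.

```c
/* Minimal part-localization sketch, assuming a fixed overhead camera and a
 * simple planar pixel-to-workspace calibration (scale + offset).  All
 * constants and the move_robot_to() stub are illustrative assumptions. */
#include <stdio.h>
#include <math.h>
#include <opencv/cv.h>
#include <opencv/highgui.h>

#define MM_PER_PIXEL_X 0.52   /* assumed calibration values */
#define MM_PER_PIXEL_Y 0.52
#define ORIGIN_X_MM    10.0
#define ORIGIN_Y_MM    25.0

static void move_robot_to(double x_mm, double y_mm) {
    /* Placeholder for a call into the robot control library. */
    printf("move to (%.1f mm, %.1f mm)\n", x_mm, y_mm);
}

int main(void) {
    IplImage *img = cvLoadImage("workcell.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (!img) { fprintf(stderr, "cannot load image\n"); return 1; }

    /* Binarize so the part appears as a white blob on a black background. */
    IplImage *bin = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    cvThreshold(img, bin, 100, 255, CV_THRESH_BINARY_INV);

    /* Find the largest external contour and take it as the part outline. */
    CvMemStorage *storage = cvCreateMemStorage(0);
    CvSeq *contours = NULL, *best = NULL;
    double best_area = 0;
    cvFindContours(bin, storage, &contours, sizeof(CvContour),
                   CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
    for (CvSeq *c = contours; c != NULL; c = c->h_next) {
        CvMoments m;
        cvMoments((CvArr *)c, &m, 0);
        double a = fabs(m.m00);
        if (a > best_area) { best_area = a; best = c; }
    }

    if (best) {
        /* Centroid in pixels from the contour's spatial moments. */
        CvMoments m;
        cvMoments((CvArr *)best, &m, 0);
        double cx = m.m10 / m.m00;
        double cy = m.m01 / m.m00;

        /* Map pixel coordinates into the robot workspace. */
        move_robot_to(ORIGIN_X_MM + cx * MM_PER_PIXEL_X,
                      ORIGIN_Y_MM + cy * MM_PER_PIXEL_Y);
    }

    cvReleaseMemStorage(&storage);
    cvReleaseImage(&bin);
    cvReleaseImage(&img);
    return 0;
}
```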
Example 2: Tier-Scalable Planetary Reconnaissance
There has been a fundamental shift in remote extraterrestrial planetary reconnaissance from segregated tier reconnaissance methods to an integrated multi-tier, multi-agent hierarchical paradigm. A cooperative multi-tier paradigm requires a flexible architecture that provides not only a mechanism for hardware access but also an agile vision fusion mechanism for vertical and horizontal integration of all vision sensor components.
Experiment
This experiment simulates a tier-scalable planetary reconnaissance mission. The main objective is for a mobile robot with specialized equipment to locate desirable rocks and take mineral samples. However, the mobile robot is sensor limited and incapable of locating a desirable target on its own. It therefore uses the visual system of a manipulator robot exploring the same area and the visual system of an aerial robot taking topological images to choose and localize acceptable rocks for sampling. The purpose of this case study is not the actual algorithms used for object detection or path planning, but to show how mobile agents can be used to integrate information obtained from remote vision systems.
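As a concrete illustration of the fusion step, the sketch below shows how an agent might map a rock detected in the aerial image to a waypoint for the sampling rover, assuming a known ground sampling distance. The detection (a simple brightness threshold), the calibration constants, and the `send_waypoint()` stub are placeholders for the algorithms actually used in the experiment.

```c
/* Sketch of fusing the aerial view with rover navigation: a candidate rock
 * is located in the aerial (topological) image, its pixel position is
 * mapped to world coordinates using an assumed ground-sampling distance,
 * and the result becomes a waypoint for the sampling rover.  All constants
 * and the send_waypoint() stub are illustrative assumptions. */
#include <stdio.h>
#include <opencv/cv.h>
#include <opencv/highgui.h>

#define GSD_M_PER_PIXEL 0.05   /* assumed ground sampling distance      */
#define AERIAL_ORIGIN_X 0.0    /* assumed world coords of pixel (0, 0)  */
#define AERIAL_ORIGIN_Y 0.0

static void send_waypoint(double x_m, double y_m) {
    /* Placeholder: in the experiment this information would be delivered
     * to the mobile robot by a migrating agent. */
    printf("rover waypoint: (%.2f m, %.2f m)\n", x_m, y_m);
}

int main(void) {
    IplImage *aerial = cvLoadImage("aerial.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (!aerial) { fprintf(stderr, "cannot load aerial image\n"); return 1; }

    /* Crude rock detection: bright pixels against darker terrain.
     * The real study uses its own detection algorithm; this stands in. */
    IplImage *bin = cvCreateImage(cvGetSize(aerial), IPL_DEPTH_8U, 1);
    cvThreshold(aerial, bin, 180, 255, CV_THRESH_BINARY);

    /* Locate the dominant blob with image moments. */
    CvMoments m;
    cvMoments(bin, &m, 1);
    double area = cvGetSpatialMoment(&m, 0, 0);
    if (area > 0) {
        double px = cvGetSpatialMoment(&m, 1, 0) / area;  /* pixel x */
        double py = cvGetSpatialMoment(&m, 0, 1) / area;  /* pixel y */

        /* Pixel -> world conversion under the assumed calibration. */
        send_waypoint(AERIAL_ORIGIN_X + px * GSD_M_PER_PIXEL,
                      AERIAL_ORIGIN_Y + py * GSD_M_PER_PIXEL);
    } else {
        printf("no candidate rock found in the aerial image\n");
    }

    cvReleaseImage(&bin);
    cvReleaseImage(&aerial);
    return 0;
}
```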
Running the code requires the installation of the Ch GAUL (http://iel.ucdavis.edu/projects/chgaul) and Ch OpenCV (http://www.softintegration.com/products/thirdparty/opencv) packages.