Vision-based Localization and Control
of Multi-Robot Formations
Interest in the coordination and control of multiple autonomous agents has grown considerably over the last decade.
The formation control problem plays an important role in this research area and has given rise to a rich
literature. By formation control we simply mean the problem of controlling the relative position and orientation of
a group of robots while allowing the group to move as a whole.
In the leader-follower approach to formation control, one robot, the leader, moves along a predefined trajectory while
the other robots, the followers, must maintain a desired distance and orientation to it (see Fig. 1(a)).
We are particularly interested in the use of omnidirectional cameras as the only on-board sensor on the robot:
they are cheap, fast, and provide rich information about the surrounding environment and the other
moving robots. However, a monocular vision sensor only provides the line-of-sight (and not the distance) to the observed object.
For this reason we study and solve the corresponding localization problem, i.e.,
estimating the robots' relative distances from the observation of their centroids.
Figure 1: (Left) As the leader moves, the follower must keep a specified formation (i.e., distance and orientation) with respect to the leader; (Right) robots are equipped only with omnidirectional cameras, which provide only the line-of-sight to the other robots.
Previous works in the literature assume knowledge of the camera calibration, the camera height from the floor, etc., thus rendering the localization problem trivial.
As an original contribution, the localization problem is here studied analytically by introducing a new observability condition,
valid for general nonlinear systems and based on the Extended Output Jacobian.
As an improvement over the existing literature, we do not assume knowledge of any camera calibration parameter
(mirror shape or focal length), nor of the pose of any stationary landmark: each camera provides only the view-angle to the other
robots, not the distance, which is instead estimated by a nonlinear observer (an extended Kalman
filter). An input-state feedback control law is then designed to stabilize the formation.
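The distance observer can be sketched as follows. This is a minimal toy version under simplifying assumptions of our own (planar Cartesian relative state, a non-rotating follower frame, and a known relative velocity), not the exact filter used in this work:

```python
import numpy as np

def ekf_bearing_step(x, P, u, z, dt, Q, R):
    """One EKF step estimating a leader's relative position from bearing only.

    x : [dx, dy] relative position estimate (planar, non-rotating frame)
    u : [vx, vy] relative velocity (leader minus follower), assumed known
    z : measured bearing to the leader, in radians
    """
    # Predict with a constant-velocity motion model.
    x_pred = x + dt * u
    P_pred = P + Q

    # Update with the bearing measurement h(x) = atan2(dy, dx).
    dx, dy = x_pred
    r2 = dx * dx + dy * dy
    H = np.array([[-dy / r2, dx / r2]])                 # Jacobian of atan2
    innov = np.arctan2(np.sin(z - np.arctan2(dy, dx)),
                       np.cos(z - np.arctan2(dy, dx)))  # angle-wrapped residual
    S = H @ P_pred @ H.T + R                            # 1x1 innovation covariance
    K = P_pred @ H.T / S
    x_new = x_pred + (K * innov).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Toy run: the filter starts with the correct bearing but a badly wrong
# range; a curving relative motion makes the range observable.
rng = np.random.default_rng(0)
dt, Q, R = 0.1, np.eye(2) * 1e-4, np.array([[1e-4]])
x_true, x_est, P = np.array([5.0, 0.0]), np.array([1.0, 0.0]), np.eye(2) * 10.0
for k in range(300):
    u = np.array([np.cos(0.05 * k), np.sin(0.05 * k)])  # known relative velocity
    x_true = x_true + dt * u
    z = np.arctan2(x_true[1], x_true[0]) + rng.normal(0.0, 0.01)
    x_est, P = ekf_bearing_step(x_est, P, u, z, dt, Q, R)
range_err = abs(np.linalg.norm(x_est) - np.linalg.norm(x_true))
```

Note that if the relative velocity in the toy run is made purely radial, the range error stops decreasing: this is exactly the unobservable situation discussed below.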
As a second contribution, thanks to our observability condition, we also identify the set of all
leader trajectories that preserve the system's observability. An insightful geometric interpretation of vision-based formation
localizability is also provided.
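The geometric picture can be illustrated numerically in a simplified bearing-only setting of our own making (not the paper's exact Extended Output Jacobian): stacking the Jacobian of the bearing h = atan2(y, x) with the Jacobian of its time derivative along the relative motion, the resulting matrix is singular when the relative velocity is purely radial (straight-line motion along the line of sight), and full rank for transverse motion:

```python
import numpy as np

def extended_output_jacobian(p, v):
    """Stack the Jacobian of the bearing h = atan2(y, x) with the Jacobian
    of its time derivative dh/dt along relative velocity v, w.r.t. the
    relative position p = (x, y)."""
    x, y = p
    vx, vy = v
    r2 = x * x + y * y
    row1 = np.array([-y / r2, x / r2])                   # d h / d p
    num = x * vy - y * vx                                # = r2 * dh/dt
    row2 = np.array([(vy * r2 - 2 * x * num) / r2**2,
                     (-vx * r2 - 2 * y * num) / r2**2])  # d (dh/dt) / d p
    return np.vstack([row1, row2])

p = np.array([3.0, 4.0])
det_radial = np.linalg.det(extended_output_jacobian(p, 2.0 * p))  # along line of sight
det_transverse = np.linalg.det(extended_output_jacobian(p, np.array([-4.0, 3.0])))
print(det_radial)      # ~0: straight radial motion is unobservable
print(det_transverse)  # nonzero: transverse motion restores observability
```

For this toy measurement model the determinant along a transverse velocity v = k(-y, x) works out to k/r², so observability degrades with distance as well as with trajectory shape.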
Extensive simulation and experimental results are also provided.
This is joint work with Prof. K. Daniilidis and Prof. George J. Pappas from the GRASP Lab, UPENN, USA.
Figure 2: (Left) The robot trajectories. During straight trajectories the information coming from the camera is not
rich enough to yield a good estimate: the state estimate becomes inconsistent, leading to an accumulation of the
system modeling errors (wheel slippage, unmodeled dynamics, etc.);
(Center-Right) at the time instants when the robot was on an unobservable straight trajectory, the determinant of the Jacobian matrix becomes almost zero and the NEES exceeds its bounds.
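The consistency check used in Figure 2 is the standard normalized estimation error squared (NEES) test; a minimal sketch for a 2-dimensional state, with the chi-square bounds hardcoded (values of chi2.ppf(0.025, 2) and chi2.ppf(0.975, 2)):

```python
import numpy as np

def nees(x_true, x_est, P):
    """Normalized estimation error squared: e^T P^(-1) e."""
    e = np.asarray(x_true) - np.asarray(x_est)
    return float(e @ np.linalg.solve(P, e))

# Two-sided 95% chi-square bounds for a 2-dimensional state (single run).
NEES_LO, NEES_HI = 0.0506, 7.3778

def is_consistent(x_true, x_est, P):
    """True if the filter's reported covariance matches its actual error."""
    return NEES_LO <= nees(x_true, x_est, P) <= NEES_HI
```

When the NEES stays above the upper bound, as in Figure 2, the filter is overconfident: its covariance P underestimates the actual estimation error.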
Figure 3: The video of the experiments.