
Easy-to-Use and Accurate Calibration
of RGB-D Cameras from Spheres

Aaron Staranowicz, Garrett Brown, Fabio Morbidi, and Gian-Luca Mariottini

Motivations

The ability to recover the intrinsic and extrinsic calibration parameters of a generic rig of RGB-Depth (RGB-D) sensors is paramount in many applications, such as visual odometry (see the image above), robot navigation, object recognition, and augmented reality.
Our goal is to devise efficient solutions for calibrating single and multiple RGB-D sensors (such as a Microsoft Kinect) using only spheres.

Our Contributions

We have created a novel calibration method for single RGB-D sensors that uses a spherical calibration object. The sphere is automatically recognized by the camera and tracked over multiple frames, which our method then uses to calibrate the RGB-D sensor.
Figure 1: left: the Microsoft Kinect; center: an RGB image with the detected ellipse; right: a depth map with the detected sphere.
Our method is easy to use (the user only has to move a sphere in front of the camera), practical (no extra equipment or geometric information about the sphere is needed), and accurate (the method is robust to sphere occlusions and outliers).
We provide a free MATLAB RGB-D calibration toolbox (see below for details on how to download it).

Technical Details

Our calibration method consists of three steps: 1) feature detection, which automatically detects the sphere and extracts corresponding features from the RGB and depth images; 2) initial calibration, in which a least-squares solution for the calibration parameters is estimated from the sphere centers observed in both the RGB and depth images (at least 6 image pairs are needed); and 3) nonlinear minimization, which refines the intrinsic and extrinsic calibration parameters.
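As an illustration of the depth-side feature extraction in step 1, the sphere can be fitted to the 3D points of the depth map with a standard algebraic least-squares formulation. This is a minimal Python sketch (the toolbox itself is in MATLAB, and its robust fitting is more involved):

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit to an (N, 3) array of 3D depth points.

    Uses the algebraic identity  x.x = 2 c.x + (r^2 - c.c),
    which is linear in the unknowns (c, r^2 - c.c).
    """
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((P.shape[0], 1))])  # [2x, 2y, 2z, 1]
    b = (P ** 2).sum(axis=1)                            # x^2 + y^2 + z^2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

Because the formulation is linear, the fit is a single `lstsq` call; a robust variant (as needed for occlusions and outliers) would wrap it in a RANSAC-style loop.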
Figure 2: Work-flow diagram of the RGB-D camera calibration algorithm.
As illustrated in the scheme of Figure 2, the RGB-D camera acquires an RGB image and a depth map, which stores a depth value at each pixel location. The first step of our method (as described above) automatically detects and fits an ellipse in the RGB image, and detects and fits a sphere in the depth map. The corresponding feature in the two views is the projected sphere center. These centers are passed to a robust least-squares method that estimates an initial solution for the intrinsic and extrinsic calibration parameters. This solution is then refined in the nonlinear minimization step, where we also estimate the distortion parameters of both the RGB and depth sensors.
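The ellipse fit in the RGB image can likewise be sketched as a linear conic fit to edge points: the smallest right singular vector of the design matrix gives the conic coefficients. This is an illustrative minimal version, not the toolbox's detection pipeline:

```python
import numpy as np

def fit_conic(uv):
    """Fit a conic  a*u^2 + b*u*v + c*v^2 + d*u + e*v + f = 0
    to an (N, 2) array of 2D edge points via SVD."""
    u, v = uv[:, 0], uv[:, 1]
    D = np.column_stack([u * u, u * v, v * v, u, v, np.ones_like(u)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]  # unit-norm coefficients (a, b, c, d, e, f)
```

For an ellipse, the fitted coefficients satisfy the discriminant condition b^2 - 4ac < 0; the projected sphere center can then be recovered from the conic parameters.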
Figure 3: Video of the feature detection and tracking used for calibration.
Figure 4: Effects of accurate calibration on RGB-D visual odometry.
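The nonlinear minimization step can be illustrated with a much-simplified sketch: minimize the reprojection error of the sphere centers over pinhole intrinsics and a translation (rotation fixed to the identity, no distortion) with SciPy's `least_squares`. The parameterization and all values below are illustrative assumptions, not the toolbox's actual model:

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(params, centers_3d, centers_2d):
    """Residuals for a simplified refinement: pinhole intrinsics
    (fx, fy, cx, cy) plus a translation t; rotation is the identity."""
    fx, fy, cx, cy, tx, ty, tz = params
    P = centers_3d + np.array([tx, ty, tz])   # map depth-frame centers to RGB frame
    u = fx * P[:, 0] / P[:, 2] + cx           # pinhole projection
    v = fy * P[:, 1] / P[:, 2] + cy
    return np.concatenate([u - centers_2d[:, 0], v - centers_2d[:, 1]])

def refine(x0, centers_3d, centers_2d):
    """Refine the parameter vector x0 by nonlinear least squares."""
    return least_squares(reprojection_residuals, x0,
                         args=(centers_3d, centers_2d)).x
```

The full method refines a richer model (rotation and distortion parameters for both sensors), but the structure is the same: stack per-center reprojection residuals and hand them to a nonlinear least-squares solver.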

RGB-D Calibration Toolbox

Please email Aaron Staranowicz (aaron.staranowicz@mavs.uta.edu) or Gian-Luca Mariottini (gianluca@uta.edu) for instructions on how to receive the MATLAB code. We are collecting statistics about the users of our Kinect calibration toolbox, so please include answers to these 4 questions in your email:
1) Which University or Company are you affiliated with?
2) How many years have you been working on computer vision related topics?
3) Where did you hear about the calibration toolbox?
4) What kind of final application/project are you planning to use the toolbox for?


Publications

  1. A. Staranowicz and G.L. Mariottini. "A Comparative Study of Calibration Methods for Kinect-Style Cameras." In Proc. 5th International Conference on PErvasive Technologies Related to Assistive Environments (PETRA'12), Article 49, 4 pages. ACM, 2012.
  2. A. Staranowicz, G.R. Brown, F. Morbidi, and G.L. Mariottini. "Easy-to-Use and Accurate Calibration of RGB-D Cameras from Spheres." In Proc. 6th Pacific-Rim Symposium on Image and Video Technology, volume 8333, pages 265-278. Springer, 2014.


