
The ASTRA Robotics Lab works in the areas of medical imaging, computer vision, and robotics.
Specific areas of interest include endoscopic vision, assistive robotics, robot localization and control, vision-based robot navigation, and the multi-view geometry of pinhole and panoramic sensors.

Endoscopic Vision

   Fast and Accurate Feature Matching for Laparoscopic Images*
Our goal is to research novel feature-matching methods that, unlike feature tracking, make no restrictive assumptions about the sequential nature of the two images or about the organ motion.
*In collaboration with Jeffrey Cadeddu, M.D., Department of Urology, UT Southwestern, Dallas
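As a generic illustration of the kind of building block involved (this is standard nearest-neighbor matching with Lowe's ratio test, not our published method, and all names are illustrative), descriptor matching between two images can be sketched as:

```python
import numpy as np

def ratio_test_match(desc1, desc2, ratio=0.8):
    """Brute-force descriptor matching with Lowe's ratio test.

    desc1: (N, D) array of feature descriptors from image 1.
    desc2: (M, D) array of feature descriptors from image 2 (M >= 2).
    Returns a list of (i, j) index pairs of accepted matches.
    """
    matches = []
    for i, d in enumerate(desc1):
        # Distance from this descriptor to every descriptor in image 2.
        dists = np.linalg.norm(desc2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        # Keep the match only if the best candidate is clearly better
        # than the runner-up; ambiguous matches are discarded.
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches
```

The ratio test discards matches whose best and second-best candidates are nearly equidistant, which is exactly the ambiguity that arises on repetitive tissue texture.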
   Long-Term, Accurate Augmented-Reality System for Laparoscopic Videos*
Our goal is to research accurate, robust, long-term augmented-reality systems for laparoscopy that overlay surgical-guidance data for in-vivo assistance during prostate and kidney cancer surgery.
*In collaboration with Jeffrey Cadeddu, M.D., Department of Urology, UT Southwestern, Dallas
   Augmented Colonoscopy and Vision-based Localization of Flexible Endoscopes*
Our goal is to devise video-based localization algorithms for monocular endoscopes that can cope with deformations and other disruptive video events captured during colonoscopy examinations.
*In collaboration with Pietro Valdastri, STORM Lab, Department of Mechanical Engineering, Vanderbilt University, Nashville
   Stereo Reconstruction for Surgical Endoscopes*
Our goal is to devise robust and accurate methods for stereo 3-D reconstruction from live videos obtained from a surgical endoscope during live interventions.
*In collaboration with Jeffrey Cadeddu, M.D., Department of Urology, UT Southwestern, Dallas
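The core geometric relation behind rectified stereo reconstruction is that depth is inversely proportional to disparity. A minimal sketch, with illustrative numbers rather than parameters of a real endoscope:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth Z = f * B / d of a point seen in a rectified stereo pair.

    disparity_px: horizontal pixel offset of the point between the views.
    focal_px:     focal length in pixels.
    baseline_m:   distance between the two camera centers, in meters.
    """
    return focal_px * baseline_m / np.asarray(disparity_px, dtype=float)

# Assumed toy values: focal length 500 px, 5 mm baseline (plausible for a
# stereo endoscope tip), and a feature with 20 px of disparity.
Z = depth_from_disparity(20.0, 500.0, 0.005)  # 0.125 m
```

Small baselines and small disparities make the estimate sensitive to matching errors, which is why robustness is the emphasis above.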

Assistive Robotics

   Low-Cost Fall Prediction with RGB-D Cameras
This project uses low-cost RGB-D cameras to extract 3-D human walking (gait) parameters. Gait is indicative of fall risk, and we want to exploit it to predict the risk of falls in the elderly.
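As a sketch of the kind of gait parameters involved (the heel-strike input and function name are illustrative, not our actual pipeline), cadence and step length can be computed from detected heel-strike events of a tracked skeleton:

```python
import numpy as np

def gait_parameters(strike_times, strike_positions):
    """Cadence and mean step length from heel-strike events.

    strike_times:     (N,) times in seconds of successive heel strikes
                      (alternating feet), e.g. detected from ankle height.
    strike_positions: (N, 3) 3-D foot positions in meters at each strike,
                      e.g. ankle joints from an RGB-D skeleton tracker.
    Returns (cadence in steps/min, mean step length in meters).
    """
    t = np.asarray(strike_times, dtype=float)
    p = np.asarray(strike_positions, dtype=float)
    step_times = np.diff(t)                              # seconds per step
    cadence = 60.0 / step_times.mean()                   # steps per minute
    step_lengths = np.linalg.norm(np.diff(p, axis=0), axis=1)
    return cadence, step_lengths.mean()
```

Deviations of such parameters from a person's baseline are among the indicators used in the fall-risk literature.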
   Easy-to-Use and Accurate Calibration of RGB-D Cameras from Spheres
Our goal is to calibrate an RGB-D camera, such as a Microsoft Kinect or an RGB plus time-of-flight (ToF) rig, using only spheres. Our novel method is easy to use, practical, and accurate.
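A basic ingredient of sphere-based calibration is recovering a sphere's center and radius from depth points on its surface. A minimal sketch using the standard algebraic least-squares sphere fit (a generic method, not necessarily the one used in our approach):

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit to 3-D points (algebraic method).

    points: (N, 3) array of points roughly on a sphere surface (N >= 4).
    Returns (center, radius).
    """
    P = np.asarray(points, dtype=float)
    # Rearrange |p - c|^2 = r^2 into a system that is linear in the
    # unknowns (cx, cy, cz, k), with k = r^2 - |c|^2:
    #     2 p . c + k = |p|^2
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)
    return center, radius
```

Because spheres look like circles from every viewpoint, their centers give reliable correspondences between the depth and color sensors.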

Single- and multi-robot localization (camera, laser, IMU, etc.)

   Laser-IMU Indoor Localization for the Visually Impaired*
Development of a localization algorithm for blind people that uses a laser scanner and an IMU mounted on a white cane. The observability of this nonlinear system is also studied to give insight into the localization algorithm. (More details here!)
* In cooperation with Joel Hesch and Prof. Stergios Roumeliotis (MARS Lab, UMN, USA)
   Vision-based Localization and Control of Multi-Robot Formations*
An observability study for vision-based localization and control of a formation of robots equipped only with on-board panoramic cameras.
* In cooperation with Kostas Daniilidis and George J. Pappas (GRASP Lab, UPENN, USA)
   Uncalibrated Paracatadioptric Video-Compass*
We developed a new geometrical property for the imaging of lines in paracatadioptric cameras. This property is used to design a robust Visual Compass algorithm that computes a closed-form estimate of the camera rotation angle using uncalibrated images.
* In cooperation with Prof. Domenico Prattichizzo (Univ. of Siena, ITALY)
   SWAN system for video-based localization*
In the fourth (final) year of the SWAN project, I joined GaTech as a postdoc to design and develop a vision system for the video-based localization of blind people in an environment with a known 3-D map. (More details here!)
* In cooperation with Prof. Frank Dellaert and Prof. Bruce Walker (GaTech, US)

Vision-based robot navigation

   Image-based Visual Servoing with Central Catadioptric Cameras
We present the auto-epipolar condition for panoramic cameras and use it to design an image-based visual-servoing control law that does not require any 3-D information or estimation process. (Videos of the experiments from the IJRR website)
* In cooperation with Prof. Domenico Prattichizzo (Univ. of Siena, ITALY)
   Image-based Visual Servoing for Mobile Robot Using Epipolar Geometry*
Autonomous image-based navigation for nonholonomic robots with a partially calibrated camera in an unknown environment. (More details here!)
* In cooperation with Prof. Giuseppe Oriolo ("La Sapienza", Rome, ITALY)
   The Epipolar Geometry Toolbox (EGT)
EGT is a free MATLAB toolbox providing a wide set of functions to approach multi-view computer vision problems (modeling, estimation, etc.) for both pinhole and central catadioptric cameras.
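A representative multi-view estimation problem of the kind EGT supports is recovering the fundamental matrix from point correspondences. A minimal sketch of the classical eight-point algorithm (written here in Python/NumPy for illustration; coordinate normalization, important in practice, is omitted):

```python
import numpy as np

def eight_point(x1, x2):
    """Estimate the fundamental matrix F from >= 8 point correspondences.

    x1, x2: (N, 2) arrays of matching pixel coordinates in the two views.
    Returns the 3x3 rank-2 matrix F satisfying x2_h^T F x1_h = 0.
    """
    x1 = np.asarray(x1, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    # Each correspondence gives one linear equation in the 9 entries of F.
    A = np.zeros((len(x1), 9))
    for i in range(len(x1)):
        u1, v1 = x1[i]
        u2, v2 = x2[i]
        A[i] = [u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
    # F is the null vector of A (right singular vector of the smallest
    # singular value), reshaped into a 3x3 matrix.
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint by zeroing the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt
```

EGT provides this kind of estimation routine, together with the corresponding models for central catadioptric cameras.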

Medical Applications

   Human-Robot Interface for Emotional and Cognitive Studies
We designed a human-robot interface to interact with the human emotional and cognitive system, with possible rehabilitation applications.
   Eye-tracking for studies on dyslexia
Use of an eye tracker for dyslexia studies in children.

Computer Vision and Image Processing Techniques for Industrial Applications

   3-D Photogrammetry Engine
Measure your environment (a house, a car-accident scene, etc.) using only two or more pictures!
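Once corresponding points are identified in two calibrated pictures, a 3-D measurement reduces to triangulation. A minimal sketch using standard linear (DLT) triangulation, a generic textbook method rather than the engine's actual implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) pixel coordinates of the same point in each image.
    Returns the 3-D point in the common world frame.
    """
    # Each view contributes two linear equations in the homogeneous
    # point X, obtained from the cross product of x and P X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Distances between triangulated points then give metric measurements of the scene, up to the accuracy of the camera calibration.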
   Real-time Image Processing
Real-time quality control, video surveillance, and statistics from images.

Software

   Real-time Computer Vision with OpenCV
Development of functions and a user guide for approaching multiple-view estimation problems.


Some robots I used to work with:
  PUMA (robotic manipulator)
  NOMAD XR-4000 (holonomic robot)
  PIONEER (nonholonomic robot)

File translated from TEX by TTH, version 3.85.
On 28 Oct 2014, 21:21.