Dr. Guang-Zhong Yang
Imperial College, London, UK
Director, The Hamlyn Centre for Robotic Surgery
- Keynote: ''Surgical Vision for Robotically Assisted and Minimally Invasive Surgery''
- Abstract: Robotic surgery has developed rapidly in recent years, with recognised commercial growth and an increasing range of operations including orthopaedic, abdominal, urological, colorectal and cardiothoracic interventions. The use of robot assistance has helped to realise much of the potential of minimally invasive surgery, with improved consistency, safety and accuracy. At the same time, the need to perform delicate surgical procedures safely in tight spaces where the surgeon cannot see directly has created a growing demand for surgical vision, reconstruction and navigation techniques. This keynote lecture outlines major technical challenges, as well as new research opportunities and clinical applications of surgical vision for robotically assisted and minimally invasive surgery. It will first provide a detailed classification of minimally invasive surgical procedures based on different access routes to the target operative anatomy, with example abdominal procedures illustrating extraluminal, intraluminal and transluminal approaches. This will then lead to a historical overview of the development of robotic surgery platforms and of the role surgical vision and navigation have played in practical clinical applications. The lecture will provide a critical analysis of existing surgical vision techniques at different stages of minimally invasive surgery, including access (to a body cavity or intraluminal site), dissection (to expose the operative site), destruction (using focused energy delivery devices or dissection instruments for ablation, resection or excision), and reconstruction. Unmet clinical requirements and new research opportunities will be discussed, and the talk will also highlight parallel developments in biomimetic robotic systems integrated with in situ, in vivo imaging and sensing towards the future evolution of medical robotics.
It will cover the latest developments in fully articulated, bio-inspired robot platforms that facilitate intraluminal or extraluminal navigation of curved anatomical pathways with integrated sensing, vision and navigation. New applications of surgical vision for providing implicit dynamic active constraints under the learning-from-demonstration framework, surgical workflow analysis, and real-time prospective augmented reality using 'inverse realism' will be discussed. The lecture will also address issues concerning effective, natural human-robot interfaces, as well as the use of vision for bringing cellular and molecular imaging into an in vivo, in situ setting to allow for real-time tissue characterisation, functional assessment, and intraoperative guidance.
- Biography: Professor Guang-Zhong Yang (FREng, FIEEE, FIET, FAIMBE) is director and co-founder of the Hamlyn Centre for Robotic Surgery and Deputy Chairman of the Institute of Global Health Innovation, Imperial College London, UK. Professor Yang also holds a number of key academic positions at Imperial: he is Director and Founder of the Royal Society/Wolfson Medical Image Computing Laboratory, co-founder of the Wolfson Surgical Technology Laboratory, and Chairman of the Centre for Pervasive Sensing. He is a Fellow of the Royal Academy of Engineering, a fellow of the IEEE, IET and AIMBE, a recipient of the Royal Society Research Merit Award, and listed in The Times Eureka 'Top 100' in British Science. Professor Yang's main research interests are in medical imaging, sensing and robotics. In imaging, he is credited with a number of novel MR phase-contrast velocity imaging and computational modelling techniques that have transformed in vivo blood flow quantification and visualisation. These include the development of locally focused imaging combined with real-time navigator echoes for resolving respiratory motion in high-resolution coronary angiography, as well as MR dynamic flow pressure mapping, for which he received the ISMRM I. I. Rabi Award. He pioneered the concept of perceptual docking for robotic control, which represents a paradigm shift in the learning and knowledge acquisition of motor and perceptual/cognitive behaviour for robotics, as well as the field of Body Sensor Networks (BSN), which provide personalized wireless monitoring platforms that are pervasive, intelligent, and context-aware. He has published over 300 peer-reviewed publications, edited over 10 books/conference proceedings, received numerous research/best paper awards, and holds a large research grant portfolio from UK/EU funding bodies, research charities, and industrial sources.
Dr. Adrien Bartoli
Université d'Auvergne, France
- Talk: ''Shape-from-Template in Gynecologic Laparoscopy''
- Abstract: The computer vision problem of 3D surface reconstruction from images finds potential applications in laparoscopy. I will first present a recent method to perform dense 3D reconstruction from a single image and a rigid 3D shape template, under several types of simple 3D deformations. I will then show how this type of method may be applied in gynecologic laparoscopy, using the uterus as an example.
- Web: http://isit.u-clermont1.fr/~ab
- Biography: Adrien Bartoli has been a Professor of computer science at Université d'Auvergne (Clermont-Ferrand, France) since 2009. He is currently leading the ALCoV group on image science and laparoscopy.
Dr. Nassir Navab
Technische Universität München, Munich, Germany
Computer Aided Medical Procedures and Augmented Reality
- Keynote: ''Relevance-based data fusion and visualization for Computer Assisted Interventions''
- Abstract: In this talk I will focus on the need for imaging, visualization and, in particular, multi-modal fusion to be defined based on clinical relevance. The challenge, however, is to understand and model such clinical relevance. This needs to be done not only through extensive interaction with physicians but also by taking advantage of the most advanced machine learning, imaging and visualization methodologies. The current providers of interventional imaging solutions either fill the operating room with a multitude of displays, presenting each modality on a dedicated screen, or simply propose a blending of such data, which often turns out not to be optimized for a given clinical procedure or for particular phases of its workflow. I will first talk about the challenges our scientific community faces in developing intra-operative, multi-modal, patient- and process-specific imaging, and the important role that robotic imaging will play in the near future. I will then present our recent results towards developing solutions which not only bring in pre-operative diagnosis and planning data through rigid, affine and sometimes deformable registration, but also enable new patient- and process-specific intra-operative functional and anatomical imaging. Finally, I will show some of our latest results on relevance-based data fusion and visualization and discuss the challenges the computer vision, robotics and medical imaging communities are facing as they move more actively into the OR.
- Biography: Nassir Navab is a full professor and director of the institute for Computer Aided Medical Procedures (CAMP) at the Technical University of Munich (TUM), with a secondary faculty appointment at its Medical School. He is also acting as Chief Scientific Officer for SurgicEye (http://www.surgiceye.com). In November 2006, he was elected to the board of directors of the MICCAI Society. He served on the Steering Committee of the IEEE Symposium on Mixed and Augmented Reality between 2001 and 2008. He is the author of hundreds of peer-reviewed scientific papers and over 40 US and international patents. He received the Siemens Inventor of the Year award in 2001 and the SMIT Technology Award in 2010, and has co-authored many award-winning papers at prestigious conferences.
Dr. Christophe Doignon
University of Strasbourg, France
- Talk: ''Active Endoscopic Vision with Structured Light: From Surgical Instrument Tracking to Internal Organ Surface Reconstruction''
- Abstract: In this talk, I shall address both in vivo surgical instrument positioning and organ surface reconstruction by means of passive and active vision. Here, 'active' refers to active sensing, as performed with structured light, a technique that allows the surface reconstruction of poorly textured or untextured scene areas. While instrument positioning is achieved with a combination of marker-based passive methods and simple spot beams, coded structured light with complex pattern generation is used to deal with deformable regions of interest. With codewords uniquely associated with the visual primitives of the projected pattern, the correspondence problem is quickly solved by means of local information only, with robustness against disturbances such as high surface curvature, partial occlusion, and out-of-field-of-view or out-of-focus regions. Real-time instrument positioning and real-time one-shot organ surface reconstruction are possible with patterns based on sub-perfect maps, where the encoding is done in a single pattern using spatial neighbourhoods and epipolar geometry (see figure below).
Figure: (left) projection of a 30x30 pattern (2 symbols, minimal Hamming distance = 2) onto the liver and abdominal wall of a pig; (right) different viewpoints and renderings of the depth map.
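As a toy illustration of why a minimum Hamming distance between codewords makes spatial-neighbourhood decoding robust, the sketch below builds the codebook of window codewords for a small binary pattern and recovers a window's position even when one symbol is mis-detected. The pattern, window size and function names are hypothetical examples for illustration only, not the actual sub-perfect map construction used in the talk.

```python
from itertools import combinations

def windows(pattern, k=3):
    """Map each (row, col) position to the flattened k x k window codeword at that position."""
    n, m = len(pattern), len(pattern[0])
    return {(i, j): tuple(pattern[i + a][j + b] for a in range(k) for b in range(k))
            for i in range(n - k + 1) for j in range(m - k + 1)}

def hamming(a, b):
    """Number of positions where two codewords differ."""
    return sum(x != y for x, y in zip(a, b))

def min_pairwise_hamming(codes):
    """Smallest Hamming distance between any two codewords in the codebook."""
    return min(hamming(u, v) for u, v in combinations(codes.values(), 2))

def decode(observed, codes):
    """Locate an observed (possibly corrupted) window by its nearest codeword."""
    return min(codes, key=lambda pos: hamming(codes[pos], observed))

# Hypothetical 4x4 two-symbol pattern; every 3x3 window yields a unique codeword.
pattern = [[0, 0, 1, 0],
           [1, 0, 0, 1],
           [0, 1, 1, 0],
           [1, 1, 0, 1]]
codes = windows(pattern)
```

Because the codewords are pairwise separated in Hamming distance, a window corrupted by one mis-detected symbol (e.g. due to occlusion or high surface curvature) still decodes to the correct position, using local information only.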
- Biography: Christophe Doignon received the B.S. degree in Physics in 1987 and the Engineer diploma in 1989, both from the École Nationale Supérieure de Physique de Strasbourg, France. He received the Ph.D. degree in Computer Vision and Robotics from the Louis Pasteur University, Strasbourg, France, in 1994. In 1995 and 1996, he worked in the Department of Electronics and Computer Science at Padua University, Italy, for the European Community under the HCM program ''Model Based Analysis of Video Information''. In 1996, he joined the Louis Pasteur University as an assistant professor in computer engineering and automation. He is now a Professor at Télécom Physique Strasbourg, University of Strasbourg. His major research interests include computer vision, medical imaging, real-time image processing, visual servoing and robot control.
Dr. Joachim Hornegger
University of Erlangen-Nuremberg, Germany
- Talk: ''Time-of-Flight Imaging in Minimally Invasive Surgery''
- Abstract: In close collaboration with Richard Wolf GmbH, we have access to the first prototype of a Time-of-Flight/RGB 3-D endoscope. In a joint research project with the German Cancer Research Center in Heidelberg, we perform research on applications that enhance conventional endoscopic intervention using 3-D surface information. In my talk I will point out the advantages of Time-of-Flight endoscopes and describe the implemented calibration technique. I will also present a preprocessing pipeline and show a first medical application of this novel device.
- Biography: Joachim Hornegger graduated in computer science and received his Ph.D. degree in Applied Computer Science (1996) at the University of Erlangen-Nuremberg (Germany). His Ph.D. thesis was on statistical learning, recognition and pose estimation of 3D objects. Joachim was a visiting scholar and lecturer at Stanford University (Stanford, CA, USA) in the academic year 1997/98. In 1998 he joined Siemens Medical Solutions Inc., where he worked on 3D angiography. In November 2001 Joachim was promoted to director of medical image processing, and in March 2003 to director of imaging systems. In parallel to his responsibilities in industry, he was a lecturer at the Universities of Erlangen-Nuremberg (1998-1999), Eichstätt-Ingolstadt (2000), and Mannheim (2000-2003). Joachim is the author and coauthor of more than 150 scientific publications, including a monograph on applied pattern recognition and a book on statistical object recognition. Besides his education in computer science, in 2003 Joachim also obtained a diploma in Advanced Management (Cross Functional and General Management, Entrepreneurship, Accounting and Controlling) from the Fuqua School of Business (Duke University, NC, USA) and Siemens. In October 2003 Joachim became professor of Medical Image Processing at the University of Erlangen-Nuremberg, and since October 2005 he has been a chaired professor heading the Institute of Pattern Recognition. Joachim Hornegger is also a professor in the Medical Faculty of the University of Erlangen-Nuremberg. His main research topics are currently medical image processing, medical vision, and pattern recognition. He is a member of the IEEE Computer Society and GI.
Dr. Gregory Hager
The Johns Hopkins University, Baltimore, USA
Computational Interaction and Robotics Lab (CIRL)
- Keynote: ''Quantitative Endoscopy''
- Abstract: T.B.A.
- Biography: Gregory D. Hager is a Professor and Chair of Computer Science at Johns Hopkins University and the Deputy Director of the NSF Engineering Research Center for Computer Integrated Surgical Systems and Technology. His research interests include time-series analysis of image data, image-guided robotics, medical applications of image analysis and robotics, and human-computer interaction. He is currently a member of the governing board of the International Federation of Robotics Research and the Council of the CRA Computing Community Consortium. In 2006, he was elected a fellow of the IEEE for his contributions in Vision-Based Robotics.
Dr. José María Montiel
Universidad de Zaragoza, Spain
- Talk: ''In-vivo Validated Visual SLAM for Hand-Held Monocular Endoscope''
- Abstract: Simultaneous Localization and Mapping (SLAM) from monocular endoscope sequences is one of the most researched topics in Minimally Invasive Surgery. SLAM can provide 3D models of the observed cavity and can also enable augmented reality. The focus of the talk is the extensive experimental validation over human in-vivo sequences. Fifteen real in-vivo human laparoscopic ventral hernia repair interventions were recorded, and accurate ground-truth distances were also registered for each procedure. The sequences were processed with the monocular EKF-SLAM algorithm, popular in robotics, tailored to deal with medical endoscope image sequences. The analysis of the experimental results allows us to conclude that SLAM is: 1) not invasive, because only a standard monocular endoscope and a surgical tool are used; 2) convenient, because only a hand-controlled exploratory motion is needed; 3) fast, because the algorithm provides the 3D map and the trajectory in real time; 4) accurate, because it has been validated with respect to ground truth; and 5) robust to inter-patient variability, because it has performed successfully over the validation sequences.
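For readers unfamiliar with the EKF machinery behind such systems, the following minimal sketch shows the generic predict/update cycle on a toy 1-D state holding a camera position and one landmark position. This is a standard textbook Kalman step under illustrative assumptions, not the tailored monocular EKF-SLAM algorithm presented in the talk; all names and values are hypothetical.

```python
import numpy as np

def ekf_predict(x, P, F, Q):
    """Propagate state mean and covariance through the (linearized) motion model F with noise Q."""
    return F @ x, F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """Correct the state with measurement z, given measurement function h and its Jacobian H."""
    y = z - h(x)                      # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy state: [camera position, landmark position] on a line.
x = np.array([0.0, 0.0])
P = np.diag([0.01, 4.0])               # camera well known, landmark very uncertain
F = np.eye(2)                          # static scene, near-stationary camera
Q = np.diag([1e-4, 0.0])               # small motion noise on the camera only
h = lambda s: np.array([s[1] - s[0]])  # measurement: landmark position relative to camera
H = np.array([[-1.0, 1.0]])            # Jacobian of h
R = np.array([[0.04]])                 # measurement noise

x, P = ekf_predict(x, P, F, Q)
x, P = ekf_update(x, P, np.array([1.0]), h, H, R)  # observe the landmark 1.0 away
```

After a single observation the landmark uncertainty (reflected in the trace of P) collapses from about 4 to well under 0.1; repeating this cycle over an image sequence is what lets the filter converge to a consistent map and trajectory in real time.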
- Biography: José María Martínez Montiel is a Full Professor (since 2012) in the Departamento de Informática e Ingeniería de Sistemas (Computer Science Department) at the Universidad de Zaragoza. He has been a postdoctoral research visitor at Oxford University and Imperial College London. He is the principal investigator on grants supported by national and European public funding bodies, and he is also involved in transferring 3D vision research results into industrial products. In the last five years his interests have included SLAM from endoscope sequences, which has led to SLAM in non-rigid scenes where the scene non-rigidity is modeled by means of the Finite Element Method (FEM). He has co-authored numerous research papers published in the most prestigious conferences and journals in the field, such as the Int. Conf. on Computer Vision and Pattern Recognition, the Int. Conf. on Robotics and Automation (finalist for the ICRA Best Vision Paper Award 2009 and winner of the ICRA Best Vision Paper Award 2010), the Robotics: Science and Systems Conference (RSS), IEEE Transactions on Robotics, and the International Journal of Computer Vision.
Dr. Dan Stoyanov
University College London, London, UK
- Talk: ''Surgical Vision: Instrument Detection and Model Based Localization''
- Abstract: Methods for detecting and localising surgical instruments in minimally invasive surgery images are important for advanced robotic assisted interventions. While the robotic control system provides information about the tool's position in the robot coordinate frame, this can have inherent inaccuracies due to the mechanical system. Vision sensors are currently a promising approach for determining the robotic instrument's position in the coordinate frame of the surgical camera. In this talk, I will describe our recent work on a vision algorithm for localising the instrument's pose in 3D, leaving only the rotation about the axis of the tool's shaft as an ambiguity. The method is based on probabilistic supervised classification and an energy minimisation algorithm for localising the pose of a prior 3D model of the instrument within a level set framework. Preliminary results on in vivo data from MIS with traditional laparoscopic and also robotic instruments indicate that the approach is promising for use on clinical data.
- Biography: Dan Stoyanov is a Royal Academy of Engineering/EPSRC Research Fellow at the Centre for Medical Image Computing (CMIC) and Department of Computer Science, University College London (UCL). His main research interests are in the development of surgical vision techniques for real-time measurements from the surgical site and higher level scene understanding. The applications of these methods are towards enhanced image guidance and control in robotic assisted surgery, enabling in vivo biophotonic imaging modalities and providing information for objective surgical skill evaluation and analysis.
Dr. Luc Soler
IRCAD, Strasbourg, France
- Talk: ''Intraoperative Augmented Reality Assisted Surgery''
- Abstract: Augmented reality is a key element of the future of minimally invasive surgery. We will illustrate applications of several different techniques based on manual registration and tracking, automatic optical and electromagnetic tracking, automatic feature tracking, and patient-specific simulation. We will show that Interactive Augmented Reality, based on manual registration and tracking, can be an interesting first step of this development, allowing the surgical area of application of future automated augmented reality to be identified. Tool tracking can be sufficient to provide a great benefit in flexible endoscope repositioning, with the intraoperative video being synchronized to a previous endoscopic navigation. Feature tracking, in turn, is an efficient technique to detect and track points of interest in the laparoscopic view, allowing first the efficient tracking of laparoscopic camera movement without an external tracking system, and second the tracking of organ movement and deformation. Finally, patient-specific simulation can provide real-time deformation of organs that can be highly accurate in predicting the position and shape of internal anatomical structures during several breathing cycles. By combining these techniques, the next step will aim to provide accurate and automatic augmented reality in laparoscopic liver surgery, which remains one of the main challenging objectives due to the large and fast organ deformation caused by the surgeon's gestures.
- Biography: Luc Soler was born on 6 October 1969. In 1999, he was valedictorian for the magister at the Higher Education Computer Science School of Paris University; he obtained his PhD in Computer Science in 1998. Since 1999, he has been a research project manager in computer science and robotics at the Research Institute against Digestive Cancer (IRCAD, Strasbourg). In October 2000, he joined the surgical team of Professor Marescaux as an associate professor at the Medical Faculty of Strasbourg. His main areas of interest are medical image processing, 3D modelling, virtual and augmented reality, surgical robotics and abdominal anatomy. His research work was awarded a Computer World Smithsonian Award in 1999, the first World Summit Award in the eHealth category in 2003, the ''Best Vision Paper'' award of the IEEE Robotics and Automation Society in 2004, the 2nd international award of the ''Sensable Developer Challenge'' in 2005, and the ''Le monde de l'informatique'' trophy in the Health category in 2006. In 2008 and 2009 he won first place in the MICCAI/Kitware Best Biomedical Visualization award.
Dr. Jonathan Sorger
- Talk: ''T.B.A.''
17:10-17:30 - Discussion and Perspectives
Bio: Gian-Luca Mariottini received his Ph.D. in Robotics from the University of Siena, Italy, in 2006. Since 2010 he has been an Assistant Professor in the CSE Department at the University of Texas at Arlington, USA, where he directs the ASTRA Robotics Lab. He has held post-doctoral positions at the GRASP Lab (UPenn), at the Georgia Institute of Technology, and at the University of Minnesota. His research interests are in robotic vision, with a particular focus on surgical vision (feature matching, structure from motion, machine learning, etc.) and augmented-reality systems for minimally-invasive surgery.
Bio: Peter Mountney received his PhD in Medical Imaging from Imperial College London in 2010, focusing on laparoscopic computer vision for robotics and pre/intra-operative image fusion. In 2011 he joined the interventional team at Siemens Corporate Research, working on multi-modal image fusion for a wide variety of procedures including cardiac and abdominal surgery. His research interests include feature detection and tracking, pose estimation, structure from motion, SLAM and machine learning.
Bio: Nicolas Padoy has been an Assistant Professor at the University of Strasbourg, France, since September 2012, holding a Chair of Excellence in medical robotics within the ICube laboratory/MixSurg institute. His research focuses on computer vision, activity recognition and their applications to surgical workflow analysis and human-machine cooperation during surgery. He completed his PhD jointly between the Technische Universität München (TUM), Germany, and INRIA Nancy, France. Subsequently, he was a postdoctoral researcher and later an Assistant Research Professor in the Department of Computer Science at Johns Hopkins University, Baltimore, USA.