ROBOTICS – THE HISTORY
The commercialization of robotic surgery started with the RoboDoc system in 1992-93, as indicated above. In spite of the exceptional performance of RoboDoc, the system went through a prolonged approval process with the Food and Drug Administration (FDA). For direct surgical manipulation, the first application was control of the camera in laparoscopic surgery. With initial seed funding from DARPA, Yulun Wang, PhD began developing the Automated Endoscopic System for Optimal Positioning (AESOP) in his newly formed company, Computer Motion, Incorporated. AESOP won acceptance of robotics by the medical and surgical community as an effective assistive device; it was the first robotic device to receive FDA approval, and it launched the robotics-in-general-surgery movement.
During this timeframe, image-guided surgery systems began commercialization with both the NeuroMate in Switzerland (figure 12) and Richard Bucholz’s Stealth system. These systems were specifically developed for neurosurgery, as was the General Electric open magnetic resonance imaging (MRI) system that was popularized by Ferenc Jolesz and Ron Kikinis of Brigham and Women’s Hospital.
While AESOP was being marketed to the surgical community, Frederick Moll, MD licensed the SRI Green Telepresence Surgery rights and started Intuitive Surgical, Inc. After extensive redesigning from the ground up, the daVinci surgical system was produced and introduced. In April 1997, the first robotic surgical (tele-operation) procedure on a patient was performed in Brussels, Belgium by Jacques Himpens, MD and Guy-Bernard Cadière, MD. Within a year, Computer Motion had put their system, Zeus (figure 14), into production. Both systems are similar in that they have remote manipulators controlled from a surgical workstation. One major difference is in the surgical workstations. The daVinci system has a stereoscopic image displayed just above the surgeon’s hands, so the surgical instrument tips appear to be an extension of the handles. This gives the impression that the patient is right in front of the surgeon (or, conversely, that the surgeon’s presence has been transported to right next to the patient; hence the term tele-presence). The Zeus system is ergonomically designed with the monitor comfortably in front of the surgeon’s chair and the instrument handles in the correct eye-hand axis for maximum dexterity. There is no illusion of being at the patient’s side; rather, there is the sense of operating at a distant site but with enhanced capabilities. Initially, the daVinci system was the only one with an additional degree of freedom, a “wrist”; however, the Zeus system has recently introduced instruments with a wrist.
The concept of dexterity enhancement was well suited to the emerging field of laparoscopic surgery, and especially to minimally invasive cardiac surgery. Although the original Green Telepresence Surgery system was designed for remote trauma surgery on the battlefield, the commercial telepresence systems were envisioned for delicate cardiac surgery, specifically coronary artery bypass grafting. It was believed that the robotic systems would allow minimal-access surgery on the beating heart. This is to be achieved by first blocking and then overpacing the heart, and gating the motion of the robotic system to the heart rate. While the minimal-access approach has been achieved, the “virtual stillness” of the gating method is still in development.
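The gating idea can be sketched in a few lines of code. The following is a minimal illustration only, not the design of any actual clinical system: hypothetical R-peak times stand in for an ECG feed, and the window fractions are made-up tuning parameters.

```python
# Illustrative sketch (assumed design, not any real system): gate robot
# motion to the cardiac cycle. New commands pass through only during a
# presumed quiescent window of each heartbeat; outside it, the last
# command is held so the tool stays still relative to the heart.

def in_quiescent_window(t, r_peaks, start_frac=0.4, end_frac=0.7):
    """Return True if time t (seconds) falls inside the quiescent
    fraction of its R-R interval. r_peaks is a sorted list of R-peak
    times; the window fractions are hypothetical tuning parameters."""
    for i in range(len(r_peaks) - 1):
        if r_peaks[i] <= t < r_peaks[i + 1]:
            rr = r_peaks[i + 1] - r_peaks[i]        # beat duration
            phase = (t - r_peaks[i]) / rr           # 0.0 .. 1.0 through the beat
            return start_frac <= phase <= end_frac
    return False  # outside the known ECG record: be conservative

def gate_command(t, command, last_command, r_peaks):
    """Pass a new command only in the quiescent window; otherwise hold."""
    return command if in_quiescent_window(t, r_peaks) else last_command
```

In practice the window would be derived from the ECG and validated against actual wall motion, but the gating logic itself reduces to this pass-or-hold decision.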
The challenge of extremely accurate and dexterous robotics was taken up in ophthalmologic surgery, specifically laser retinal surgery. The blood vessels on the retina are 25 microns apart, while the limit of human performance is an accuracy of approximately 200 microns. Stephen Charles, MD of Baptist Hospital and MicroDexterity Systems, Inc (MDS) in Memphis, TN collaborated with a brilliant team at the NASA Jet Propulsion Laboratory (JPL), which included Paul Schenker, Hari Das, Edward Barlow and others, to develop the Robot Assisted MicroSurgery (RAMS) system. This system included three basic innovations: 1) tracking of the eye’s saccades (at 200 Hz), so the video image was perfectly still on the video monitor; 2) motion scaling of 100 to 1, giving the system 10-micron accuracy; and 3) tremor reduction (in the 8-14 Hz band), removing any tremor or inaccuracy. Today, any surgeon could sit down at the microdexterity system and perform laser surgery with 10-micron accuracy, that is, 20 times beyond the accuracy of the unaided human hand.
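Two of these innovations, motion scaling and tremor reduction, can be sketched simply. The snippet below is an assumption-laden illustration, not the actual RAMS design: the sample rate, cutoff frequency and filter choice are all hypothetical. A 100:1 scale maps 1 mm of hand motion to 10 microns of tool motion, and a first-order low-pass filter with a cutoff below the 8-14 Hz tremor band smooths the scaled command.

```python
import math

# Illustrative sketch of motion scaling and tremor filtering.
# All parameter values are assumptions for demonstration.

SCALE = 100.0   # 100:1 hand-to-tool scaling
FS = 1000.0     # control-loop sample rate, Hz (assumed)
CUTOFF = 4.0    # low-pass cutoff, Hz: below the 8-14 Hz tremor band

# First-order (exponential) low-pass filter coefficient
alpha = 1.0 - math.exp(-2.0 * math.pi * CUTOFF / FS)

def process(hand_positions_mm):
    """Scale hand motion down 100:1, then low-pass filter out tremor."""
    out = []
    y = hand_positions_mm[0] / SCALE        # initialize filter state
    for x in hand_positions_mm:
        scaled = x / SCALE                  # 1 mm of hand motion -> 10 microns
        y += alpha * (scaled - y)           # exponential smoothing step
        out.append(y)
    return out
```

A real system would use a sharper filter so that deliberate slow motion passes through with minimal lag while the tremor band is strongly attenuated, but the scale-then-filter pipeline is the core idea.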
Remote surgery using robotics was long limited to short distances because of latency. It was only recently (2001) that the Zeus system was used for a trans-Atlantic robotic surgery operation between New York City and Strasbourg, France by Jacques Marescaux and Michel Gagner. The limitation on long-distance surgery is the latency, or delay, which cannot exceed 200 msec. At longer delays, the time from the surgeon’s hand motion until the action of the robot’s end effector (instrument) is so great that the tissue could move and the surgeon would cut the wrong structure. In addition, with delays greater than 200 msec there is conflict within the system, such that the robotic system becomes unstable. However, Marescaux and Gagner employed a dedicated high-bandwidth Asynchronous Transfer Mode (ATM) terrestrial fiberoptic cable and were able to conduct the surgery with a delay of only 155 msec. Thus, with a very broadband, terrestrial fiber optic cable connection, it is possible to perform remote surgery over thousands of miles. When the Next Generation Internet, with its 45 Mbyte/sec fiber optic cabling, becomes universally available, such remote surgery can become a reality in many places in the world.
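The physics behind this latency budget is easy to check. The sketch below assumes signals travel at roughly two-thirds the speed of light in optical fiber and uses an approximate 6,200 km New York-Strasbourg path; both figures are rough assumptions, and real fiber routes are longer than the great-circle distance, which is one reason the measured delay (155 msec) exceeds this physical minimum.

```python
# Back-of-the-envelope latency budget for trans-Atlantic telesurgery.
# Both constants below are approximations for illustration.

C_FIBER = 2.0e8    # m/s, approximate signal speed in optical fiber (~2/3 c)
DISTANCE = 6.2e6   # m, approximate New York-Strasbourg path length
LIMIT = 0.200      # s, usability threshold cited in the text

one_way = DISTANCE / C_FIBER       # propagation delay, one direction
round_trip = 2 * one_way           # command out + video image back
budget_left = LIMIT - round_trip   # margin for coding, switching, display

print(f"one-way propagation:  {one_way * 1000:.0f} ms")
print(f"round-trip minimum:   {round_trip * 1000:.0f} ms")
print(f"margin under 200 ms:  {budget_left * 1000:.0f} ms")
```

The roughly 60 msec round-trip propagation floor leaves the rest of the 200 msec budget for video compression, network switching and display, which is why a dedicated low-overhead ATM link was essential to staying at 155 msec.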
CHALLENGES AND FUTURE SYSTEMS
The current systems are just the beginning of the robotics revolution. All of the systems have in common a central workstation from which the surgeon conducts the surgery.
This workstation is the central point that integrates the entire spectrum of surgery. Patient-specific pre-operative images can be imported into the surgical workstation for pre-operative planning and rehearsal of a complicated surgical procedure, as is being developed by Jacques Marescaux, MD.
One such example illustrates a patient’s liver with a malignant lesion, and the methods of visualization, pre-operative planning and procedure rehearsal. At the time of surgery, this image can be imported into the workstation for intra-operative navigation. The workstation can also be used as a stand-alone platform for surgical simulation, for training of surgical skills and operative techniques. Thus, at an invisible level, the challenge is going to be integrating the software of all the different elements: importing images, pre-operative planning tools, automatic segmentation and registration for data fusion and image guidance, and sophisticated decision and analysis tools to provide automatic outcomes analysis of the surgical procedure.
On the technical side, few if any of the systems include the full range of sensory input (e.g., sound, haptics or touch), and there are but a few simple instruments (end effectors). The next-generation systems will add the sense of touch and improved instruments. The instruments will need to include both standard mechanical instruments and energy-directed instruments such as electrocoagulation, high-intensity focused ultrasound, radiotherapy, desiccation, ablation, etc. In addition, advanced diagnostic systems, such as ultrasound, near-infrared and confocal microscopy, can be mounted on the robotic systems and used for minimally invasive diagnosis. The systems will become smaller, more robust (not requiring a full-time technician) and less expensive, and they will be adapted to the requirements of the other surgical subspecialties.
As robotics evolves, the systems will become more intelligent, eventually performing most, if not all, of an operative procedure. In current systems such as RoboDoc and NeuroMate, the surgeon preplans the operation on patient-specific CT scans. This plan is then programmed into the surgical robot, and the robot performs precisely what the surgeon would have done if (s)he were performing the operation, but with precision and dexterity above human limitations. This trend will continue, with the surgeon planning more and more of the operation for the robot to carry out effectively and efficiently. The robot must remain under the complete control of the surgeon, so that the surgeon can take over if something unexpected occurs. It is conceivable that in the distant future, under special circumstances such as remote expeditions or the NASA Mission to Mars, robots would perform the entire surgical procedure. In the nearer future, however, hybrid hardware-software systems will be developed that perform complete portions of an operation, such as an anastomosis or nerve grafting.
These systems will require a complicated infrastructure, and the operating room (OR) of the Future will have to accommodate them. The unique requirements for these systems include a very robust information infrastructure, access to information from the outside (such as X-rays, images and consultations), voice control of the system by the surgeon, and microminiaturization of the systems. Perhaps the OR will evolve to resemble more of a “control room” because of the large number of electronic systems that need to be controlled. An interesting product involved with patient monitoring and control is the Life Support for Trauma and Transport (LSTAT), which is in essence an entire intensive care unit (ICU). Although the LSTAT was developed by the military as an evacuation system for the battlefield (the “trauma pod” from Robert Heinlein’s “Starship Troopers”), it contains complete monitoring and administration systems and telemedicine capability, can be docked and undocked without removing the patient, and is fully compatible with current tele-robotic systems.
A system similar to this may be incorporated into the OR of the Future to facilitate patient anesthesia, surgery and transportation while maintaining continuous monitoring.
There has been speculation about the use of nanotechnology to inject minuscule robots into the blood stream to migrate, or be navigated, to a target. Numerous concept diagrams show mechanical types of systems that are either controlled by a surgeon or autonomous. While interesting conceptually, there is little practical understanding of how to actually construct such total, complex systems on a molecular level, and more importantly, how to control them. The first generations of these systems will not be visible to the eye, will probably be manufactured chemically by the billions, and will not be directly controlled but rather, like designed drugs, will be programmed to recognize certain cell or tissue types to deliver medication or cause ablation.
Micro-electro-mechanical systems (MEMS) are frequently discussed in conjunction with nanotechnology; however, these systems are one thousand times larger (on the order of 1.0 x 10^-6 meters) than nanotechnology (1.0 x 10^-9 meters). Such systems would be visible as very tiny robots that could be directly controlled by a surgeon. However, as the technology scales down in size, it also scales down in the power or force it can generate, making it extremely difficult to actually conduct work at this scale. While there are a number of MEMS robots, none are actually performing any significant work, let alone any activity resembling a surgical procedure. Nevertheless, MEMS and nanotechnology are areas of future potential for surgical robotics that will take decades to develop and perfect. It is essential for surgeons to be aware of these technologies, and of others such as quantum mechanics, biomimetic materials and systems, tissue engineering and genetic programming, in order to anticipate the great revolution that is developing.
Robotics has established a foothold in surgical practice. Commercial systems have been available for a few years, and their value is undergoing stringent scientific evaluation in randomized clinical trials. While the initial reports are promising, longer-term, evidence-based outcomes will be necessary to prove their efficacy. More importantly, it will be necessary to prove cost effectiveness, in addition to addressing other significant non-technical issues: accommodating the operating rooms, training OR personnel and surgeons, and gaining acceptance of the technology. However, the future is promising because of the great potential of these systems to extend surgical performance beyond human limitations. In addition to the typical robotic systems available today, next-generation systems drawing on the emerging MEMS and nanotechnology fields will extend these capabilities even further. This nascent field will provide fruitful and rewarding research for decades to come, with the promise of greatly improving the quality of surgical care for patients.