Abstract
Following early acceptance by urologists, the use of surgical robotic platforms is rapidly spreading to other surgical fields. This empowerment of surgical perception via robotic advances occurs in parallel to developments in intraoperative molecular imaging. Convergence of these efforts creates a logical incentive to advance the decades-old image-guided robotics paradigm. This yields new radioguided surgery strategies set to optimally exploit the symbiosis between the growing clinical translation of robotics and molecular imaging. These strategies intend to advance surgical precision by increasing dexterity and optimizing surgical decision-making. In this state-of-the-art review, topic-related developments in chemistry (tracer development) and engineering (medical device development) are discussed, and future scientific robotic growth markets for molecular imaging are presented.
Today’s robotic surgery paradigm is the direct result of a decades-long drive toward minimally invasive treatment strategies. The enhanced dexterity and ergonomics that lie at the robot’s core have motivated an increasing number of surgical disciplines to pursue robotics, resulting in a global growth market for robotic surgery that already extends to the use of more than 9,000 robotic systems in at least 12 million surgeries so far (1). Currently, the standard is set by the telerobotic da Vinci platform (Intuitive Surgical), which incorporates a stereoscopic fluorescence camera. However, an increasing number of alternative robotic platforms are being, or have already been, translated into the clinical setting (2). The surgical use of robots has also evoked scientific interest; the number of publications related to robotic surgery has shown a steep upward trend since 2000 (2000–2004, 704 publications, vs. 2019–2023, 19,018 publications; search term, “robotic surgery” in PubMed).
In parallel to the development of robotic platforms, the surgical field is benefitting from the rise of intraoperative molecular imaging (IMI), a molecular imaging subdiscipline that enhances surgical perception. Most commonly, this is done by exploiting radioguidance and fluorescence guidance. Perception enhancement promises to improve target identification and, with that, surgical accuracy and oncologic outcomes. Increases in surgical precision can also lead to a reduction in the number and severity of surgically induced complications. Opportunely, the intent of robotic surgery to increase surgical dexterity converges with that of IMI to augment surgical perception, yielding the subdiscipline of molecular image-guided robotics (IGR). Target illumination in this subdiscipline is enabled using approved radiopharmaceuticals (e.g., for identification of nodal involvement in oncology (3,4)) and fluorescent dyes (e.g., for visualization of physiologic measures in an anastomosis or lymphangiography (5,6)). In addition, several experimental pharmaceuticals that target, for example, tumor tissue are under investigation. An overarching factor is that complex interventions targeting small lesions, in particular, have proven to be highly reliant on the insights provided by preoperative road maps based on SPECT/CT or PET/CT images (4). If detailed preoperative information is available, the surgical approach can be preplanned, or the likelihood of intraoperative target identification can even be predicted (7). During surgery, the general emphasis seems to lie in the use of the γ-emitting radioisotope 99mTc and the approved near-infrared dye indocyanine green, but other isotopes and fluorescent dyes have also been successfully used (6,8).
Best-of-both-worlds IMI scenarios are offered by approaches that combine the benefits provided by radiopharmaceuticals (quantitative [pharmacokinetic] measures and in-depth detection) with those of fluorescent dyes (video-rate imaging and <1 cm superficial detection (8,9)). For example, the initial successes with hybrid sentinel node approaches in prostate cancer patients (10) have instigated the dissemination of this technique in other robotic-surgery indications such as esophageal (11) and bladder (12) cancer.
In a highly interactive intervention such as surgery, decision-making is based on the surgeon–patient interaction. Here, environmental perception is the root cause behind the surgical actions and is defined by the surgeons’ sensory responses in relation to the patient. This concept can be illustrated using an example provided by Bohg et al. (13) showing that shape recognition based on static imaging provides 49% accuracy in object recognition, whereas rotation of 3-dimensional vision enriches perception with a corresponding increase to 72%. Furthermore, addition of tactile information (sensing) was shown to result in an eventual 99% accuracy. These insights into interactive perception can be effectively translated to IGR. To this end, static images allow identification of pharmaceutical- and radiopharmaceutical-avid surgical targets and their intensity of uptake, but to enhance the level of perception, the static images require complementary imaging–sensing technologies—for example, in the form of counts/s—that support interactive tissue interpretation. Digitization of multisensory data, combined with artificial intelligence, subsequently offers advantages that pave the way toward a future in which perception-enhanced performance helps realize robotic autonomy, such as via autonomous implementation of image guidance to a level that surpasses that of surgeons.
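To make the notion of a counts/s-based sensing cue concrete, a minimal sketch is given below. The function name, the 1.5× background threshold, and the 2σ Poisson noise check are illustrative assumptions for demonstration, not a validated clinical decision rule.

```python
import math

def is_tracer_avid(target_cps: float, background_cps: float,
                   acquisition_s: float = 1.0,
                   ratio_threshold: float = 1.5) -> bool:
    """Flag tissue as tracer-avid when the signal-to-background ratio
    exceeds an (assumed) threshold and the excess signal lies outside
    Poisson counting noise."""
    if background_cps <= 0:
        return target_cps > 0
    sbr = target_cps / background_cps
    # Poisson standard deviation of the count rate over the acquisition window.
    sigma = math.sqrt(target_cps * acquisition_s) / acquisition_s
    return sbr >= ratio_threshold and (target_cps - background_cps) > 2 * sigma

print(is_tracer_avid(300, 100))  # clearly above background -> True
print(is_tracer_avid(110, 100))  # within noise and below threshold -> False
```

A readout like this is what turns a static uptake map into an interactive cue: the surgeon probes the tissue and receives an immediate avid/not-avid interpretation alongside the raw count rate.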
In this state-of-the-art review, key perception-enhancing components in molecular IGR are addressed because these provide a means to advance the field of minimally invasive robotic surgery via an imaging input (Fig. 1). Key components such as pharmaceuticals and radiopharmaceuticals for target definition, perception enhancement, digitization of data streams, technology assessment, and automation are discussed, and their place in molecular image–guided surgery is emphasized. Translational value is pursued by examining clinical boundary conditions and by defining how these reflect on technologic design choices.
NOTEWORTHY
Advances made in intraoperative molecular imaging need to converge with the parallel technologic thrust in surgery that is driving the implementation of robotics across a broad number of clinical indications.
Improving the precision of surgical interventions means surgeons must be provided with tools (pharmaceuticals and medical devices) that help increase their perception of the surgical environment.
In an analogy to how touch enhances human vision, the surgical value of molecular imaging can be complemented with real-time sensory feedback coming from the robotic manipulators.
To quantify the performance increases that are being realized through surgical guidance, the field may have to look past traditional long-term outcome measures and complement these efforts with kinematic assessments that register improvements in the surgical actions.
Molecular imaging is a crucial component in delivering on the future promise of autonomous surgical robots that are capable of complex decision-making in a dynamic surgical environment.
PHARMACEUTICALS AND RADIOPHARMACEUTICALS FOR TARGET DEFINITION
Perception starts with the ability to separate a target tissue from its surroundings. In IMI, this separation is enabled via the use of pharmaceuticals and radiopharmaceuticals that specifically highlight anatomic or disease-specific features, also called tracers. The development of such pharmaceuticals for surgical purposes finds its origin in radioimmunoguided surgery, a concept that was introduced in the late 1990s and that covers both receptor-targeted and physiology-based approaches (14). The surgical employment of physiology-based approaches has particularly benefitted from the intrinsic pharmacokinetic properties of clinically approved fluorescent dyes such as indocyanine green and fluorescein (6). Today, the use of fluorescence is actively being expanded toward receptor-targeted applications as well. This generally results in the use of fluorescent analogs of receptor-targeted radiopharmaceuticals that are used in diagnostic nuclear medicine. Although many of these pharmaceuticals show promising results in the preclinical setting, only a handful have managed to evolve further. After translation, even fewer break through the commercialization boundary and achieve widescale adoption. Nevertheless, the emergence of new imaging pharmaceuticals supports a widening of the range of surgical targets and the dissemination of IGR to the indications that currently make up the robotic surgery market (urology, 23%; gynecology, 23%; general surgery, 19%; cardiothoracic surgery, 9%; and other indications, 26% (15); Fig. 2).
The ability to sensitively detect radiopharmaceuticals when applied within a microdosing regimen (≤100 μg/patient (16)) greatly facilitates the translational aspects of radioguided surgery. Studies in nuclear medicine indicate that dosing influences the quality of imaging data, whereby an increase in dosing tends to negatively affect obtained results (17). A word of caution here is that a combination of small lesions, low receptor expression levels, and suboptimal tracer affinities could still result in false-negative outcomes (18). Fluorescent tracer derivatives tend to be more translationally impaired because their inferior detection sensitivity often is compensated for by application of therapeutic dosing regimens (mg/kg) (19). Recent dosing studies with the fluorescent prostate-specific membrane antigen (PSMA)–targeting tracers IS-002 (20) and OTL78 (21) indicate that the use of high doses tends to result in receptor oversaturation, which not only negatively affects signal-to-background ratios but also increases overtreatment (false-positive results).
It can be considered beneficial for surgical perception when diagnostic 3-dimensional images, provided by nuclear medicine, can be substantiated with dynamic intraoperative tracing or imaging findings. The correlation between pre- and intraoperative findings is, among others, supported by the availability of theranostic pairs of pharmaceuticals and radiopharmaceuticals that can be used at the same, or similar, dosing. A textbook example is the use of 68Ga-/18F-PSMA PET for diagnostics and 99mTc-/111In-PSMA for surgical radiotracing (3). For this tracer pair, it was recently shown that the SUVmax on 68Ga-/18F-PSMA PET directly relates to the 99mTc-PSMA signal intensities encountered during surgery (7). Uniquely, the common radiopharmaceutical optimization of the molar activity and specific activity allows accurate lesion identification without oversaturation (18). Important to realize is that intraoperative imaging technologies are not expected to reliably detect lesions not visible in, for example, preoperative PET/CT road maps (22). Following this rationale, theranostic pairs that target the somatostatin receptor could be used to facilitate the resection of gastroenteropancreatic neuroendocrine tumors with, for example, 68Ga-DOTATOC or 99mTc-EDDA/HYNIC-octreotate (23). Looking ahead, various other diagnostic PET radiopharmaceuticals have 99mTc-containing analogs, indicating that more targets could be exploited for surgical guidance. Examples are 99mTc-pentixafor (target: chemokine receptor-4 expressed on, e.g., glioblastoma (24)), 99mTc-fibroblast activation protein inhibitor 34 (target: fibroblast activation protein, involved in >28 different cancer types (25)), 99mTc-folate (target: folate receptor (26)), 99mTc-DB15 (target: gastrin-releasing peptide receptor (27)), 99mTc-IMMU-4 Fab′ (target: carcinoembryonic antigen (28)), and a variety of targets and radioisotopes previously pursued in radioimmunoguided surgery initiatives (29).
As sensitivity and dosing seem to play a critical role in achieving a correlated pre- and intraoperative accuracy, it will be complex to create well-correlated theranostic pairs made of, for example, PET tracers (microdosing and depth-independent detection) and purely fluorescent tracers (therapeutic dosing and superficial detection only). This is certainly true for small molecules and may be dosing-dependent for monoclonal antibodies, making it a topic in need of further investigation. Hybrid tracers that contain both a radioactive and a fluorescent component, however, seem to provide a logical design strategy in the pursuit of tracers that can directly relate intraoperative fluorescence findings to findings of noninvasive preoperative imaging (9).
PERCEPTION-ENHANCING MODALITIES
Given the fact that tissue tends to move during surgery, static (preoperative) images provide only limited guidance. Technologies that enrich the robots’ perceptual abilities (e.g., sensing and vision) during the surgical act will help enhance the surgeon–patient interaction. To integrate such modalities in the robotic platform, several technical complexities need to be overcome—for example, accessibility of the target, perception of stiffness during surgical maneuvers (i.e., the fulcrum effect (30)), and limited freedom of movement.
Endoscopic vision has played a critical role in the success of surgical robotics. Video image guidance is facilitated via system-integrated endoscopes that provide 3-dimensional video streams of the surgical field. These endoscopes include traditional white-light imaging and, in some cases, fluorescence imaging (e.g., the Firefly cameras on the da Vinci Si, X, Xi, and SP systems [Intuitive] (31), the TIPCAM Rubina [Karl Storz] video endoscope on the HUGO RAS [Medtronic] system (32), and more recently the Versius Plus vLimeLite [CMR Surgical] system). The integration of fluorescence imaging and the ability to identify moving tissues at video rate has instigated a paradigm shift toward the acceptance of fluorescence guidance in surgery. This success underscores how the integration between the robot and perception-enhancing modalities fundamentally determines the usability and impact of a technology.
Perception is optimal when vision is combined with a sense of touch (e.g., palpation, pressure, temperature sensation, or pain). When the surgeon is placed behind a console at a distance from the patient, intentional interactions with the patient are typically made by the robot’s end-effectors, meaning the manipulating instruments. Unfortunately, the tool–patient interactions of, for example, the initial da Vinci platforms were deprived of such sensations. The newest da Vinci platform now incorporates tactile feedback. A logical next step in IGR design is to further enrich manipulating instruments with alternative senses of touch (Fig. 3A), allowing them to fulfill a role as a multisensory surrogate for the surgeon’s hands. An advantage of surgical instruments is that they can be technically enhanced to incorporate sensing capabilities that go far beyond human sensory capabilities—for example, sensors that support the detection of pharmaceuticals and radiopharmaceuticals and in effect enable fingertip molecular sensing. The first steps toward realizing sensory enrichment have already been made with tethered Drop-In ultrasound (33), γ- (34), β- (35), and fiber-confocal (36) probes and with the pursuit of second-generation click-on sensors that can be applied directly onto the surgical tool (Fig. 3B (37,38)). As expected, these sensory readouts demonstrated perception enhancement in applications that also used fluorescence vision (39). Clinical and preclinical evaluations indicate that sensory enrichment can be extended to a variety of molecular imaging signatures, such as β-particles, Raman spectroscopy, mass spectrometry, fluorescence, and fluorescence lifetime (34,40,41).
In contrast to these efforts to enhance the surgeon’s perception during the intervention, off-line back-table (ex vivo) assessments are also increasingly being proposed for IGR (42). Promising examples include the use of Cerenkov imaging (43), confocal systems (36), small-bore PET systems (44), and freehand tissue scanning (45). Although such assessments provide confirmatory value regarding target removal (42), it remains difficult to relate their measurements to the intraoperative pose of the target.
DIGITIZATION OF DATA STREAMS
Uniquely, surgical robots provide the hardware and the computing power to support data integration. This makes them ideal as digital operating platforms that provide a means to absorb and merge or multiplex multisensory data streams (46), including data streams that are not related to imaging, such as patient history, anesthesiology, and logistics. These inputs can be converted to outputs that highlight findings and send alerts to the surgical staff. When combined with smart algorithms, pre-, intra-, and postsurgical data streams can be processed to unmatched levels of complexity (Fig. 4A). All these then come together in a “control tower” that helps to redefine how surgery is analyzed and could be performed (data–surgeon interaction (47)).
The most straightforward example of imaging–data integration during robotic surgery is the split-screen visualization of preoperatively acquired imaging road maps (e.g., CT, lymphoscintigraphic, or SPECT/CT images) directly next to the surgical video feed (34). This strategy helps to actively relate diagnostic imaging information (static images) to the dynamic surgical environment. This can be further enhanced via the employment of augmented- or mixed-reality visualizations whereby the preoperative images are overlaid onto the surgical video feed (Fig. 4B (48)). Currently, this strategy most widely uses radiologic images (CT and MRI). Nevertheless, there are also examples of nuclear medicine–generated images being displayed over a fluorescence-enhanced video feed (49). Active positional tracking of both instruments and patient anatomy supports Global Positioning System–like directional guidance (Fig. 4C). Such navigation strategies require a direct relation between the pose of the robot, the pose of the target during surgery, and the pose of the target during preoperative imaging. As this relation can be realized using rigid landmarks, it is routinely applied during, for example, orthopedic surgery, skull surgery, and neurosurgery (50). Unfortunately, implementation of image-to-patient registration in soft-tissue indications is still hampered by challenges related to deformations caused by positioning, insufflation, breathing, and the tissue movement of the surgical manipulation itself. This stresses the need for confirmatory surgical modalities such as fluorescence imaging or γ-tracing that can be used to correct the navigation accuracy in real time (49). Uniquely, the active tracking of the Drop-In γ-probe during surgery has opened the possibility to register its intraabdominal readout with its positional location.
This feature, when complemented by freehand image-reconstruction algorithms, can enable an interaction-facilitated mixed-reality vision enhancement called robot-assisted SPECT (Fig. 4B, right (51)). A tomographic form of digital perception enhancement could in the future also benefit other robotic sensing modalities.
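The rigid, landmark-based image-to-patient registration described above can be illustrated with a minimal point-based (Kabsch-type) least-squares alignment. The sketch below assumes paired, noise-free landmarks and deliberately ignores the soft-tissue deformations discussed in the text; it is a conceptual illustration, not a navigation-grade implementation.

```python
import numpy as np

def rigid_register(moving: np.ndarray, fixed: np.ndarray):
    """Least-squares rigid transform (rotation R, translation t) mapping
    paired landmarks `moving` (N x 3, e.g., from a preoperative scan) onto
    `fixed` (N x 3, e.g., tracked intraoperative positions)."""
    mu_m, mu_f = moving.mean(axis=0), fixed.mean(axis=0)
    H = (moving - mu_m).T @ (fixed - mu_f)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correction term prevents a reflection from sneaking into R.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_f - R @ mu_m
    return R, t

# Synthetic check: recover a known 30-degree rotation about the z-axis.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
pts = np.random.default_rng(0).normal(size=(6, 3))
R, t = rigid_register(pts, pts @ R_true.T + np.array([10.0, -4.0, 2.0]))
assert np.allclose(R, R_true) and np.allclose(t, [10.0, -4.0, 2.0])
```

In soft tissue, such a rigid solution only provides a starting estimate; the confirmatory intraoperative modalities mentioned above are what make it usable in practice.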
In surgical practice, it often remains challenging to interpret the collected data. Incorrect interpretation can lead to a false-negative (missed lesion) or false-positive (overtreatment) readout. In the context of data processing, computer vision algorithms such as artificial intelligence strategies can help support high-end feature extraction. Early examples include anatomy recognition, instrument segmentation (Figs. 4C and 4D) (7), and fluorescence intensity interpretation (52).
TECHNOLOGY ASSESSMENT
The societal incentive for the use of robotics is to offer precision enhancement and a decrease in short- and long-term complications. These features are not defined by first-in-human proof-of-concept data but rather by multivariate health technology assessments performed over a prolonged period (53); health technology assessment is a systematic and multidisciplinary evaluation of the properties of health technologies and interventions, covering both their direct and indirect consequences, making it a bridge that connects the world of research to that of policy making. Assessment of the patient benefit embodies traditional outcome measures such as retrospective database analyses of complications, quality-adjusted life years, and disease-free survival. For example, quality-adjusted life years have been used to clarify for which indications robotic surgery may (e.g., for prostatectomy (54)) or may not (e.g., for cystectomy (55)) be cost-efficient. The ability to provide high-end evidence on benefits for the patient or the treating physicians not only drives technologic dissemination (56) but also defines the ability to make a healthy business case for a technology. Here, it should be noted that something can be technologically superior but fail during translation simply because of financial reasons. Alternatively, technologies with seemingly poor business cases can make it simply because of financial backing and strong public relations efforts. Currently, we are in a situation in which commercial success has become the best measure of technologic value. When there is healthy competition in a market, one may argue that commercial success is driven by cost, which will ultimately benefit the health care systems and patients. But when technologic availability is limited, commercial interests may not always yield the best patient benefit.
For IGR technologies to offer optimal benefit, it is highly desirable, or perhaps even necessary, to come up with objective means to score value. In this respect, shifting focus to the field of IMI immediately exposes a challenge because long-term technologic assessments are rarely reported. Despite countless new concepts, pharmaceuticals, and prototypes that have been evaluated in first-in-human clinical studies, only a handful of IMI procedures has been validated on the basis of outcome measures. Here again, progress is being limited by the fact that companies or investors are reluctant to support 5–10 y of clinical trials. Examples that were able to overcome this barrier are protoporphyrin IX photodynamic diagnostics, sentinel node procedures, and PSMA radioguided surgery procedures (3,10,57), and only the latter two have been evaluated in the context of IGR.
Unfortunately, traditional long-term patient outcome measures do not match well with the speed at which research and development activities are currently being conducted at innovation labs, start-ups, and companies. At the same time, the pursuit of novelty and intellectual property creates the danger that innovations are not validated according to the highest norms and standards, which can ultimately lead to late clinical failure. When assuming that the goal of IGR is to use perception enhancement to advance the surgeon–patient interaction, one can even claim that traditional patient outcome readouts provide only an indirect measure of the technologic impact. This can instigate a search for alternative performance assessment strategies. If we look at the way technologic enhancement is assessed in areas such as sports and motor sports, it becomes clear that movement kinematics, recorded during the act, provide a wealth of quantitative readouts regarding performance. Because the surgeons’ skills are defined by dexterity (gesture) and decision-making (perception) (58), extraction of multidimensional kinematic metrics related to instrument movement (e.g., speed, path length, jerkiness, and directionality) provides a means to objectively assess how innovations alter the surgeon–robot interaction (7,59). This in turn can be predictive for the surgeon–patient interaction. Recently, such strategies have been successfully exploited to quantify how cues based on pharmaceutical and radiopharmaceutical signal intensities and signal-to-background ratios impact surgical decision-making (7,60).
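As a simplified illustration of such kinematic readouts, instrument-tip metrics such as path length, mean speed, and squared jerk can be computed from a sampled trajectory. The metric definitions and sampling setup below are assumptions for demonstration, not a validated surgical assessment suite.

```python
import numpy as np

def kinematic_metrics(positions: np.ndarray, dt: float) -> dict:
    """positions: (T, 3) instrument-tip samples at a fixed interval dt [s].
    Returns path length, mean speed, and mean squared jerk (a common
    smoothness proxy) via successive finite differences."""
    v = np.diff(positions, axis=0) / dt   # velocity, (T-1, 3)
    a = np.diff(v, axis=0) / dt           # acceleration
    j = np.diff(a, axis=0) / dt           # jerk
    speed = np.linalg.norm(v, axis=1)
    return {
        "path_length": float(speed.sum() * dt),
        "mean_speed": float(speed.mean()),
        "mean_sq_jerk": float((np.linalg.norm(j, axis=1) ** 2).mean()),
    }

# Sanity check: straight constant-velocity motion has zero jerk and a
# path length equal to speed times elapsed time.
t = np.arange(0, 1.0, 0.01)
traj = np.stack([0.05 * t, np.zeros_like(t), np.zeros_like(t)], axis=1)
m = kinematic_metrics(traj, dt=0.01)
```

Tracking how such metrics shift when an imaging or sensing cue is introduced offers the kind of quantitative, during-the-act performance readout discussed above.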
AUTOMATION
In industry, the term robotics goes together with the term automation. Nevertheless, today’s teleoperative surgical systems are classified as having no to low (tremor filtering) autonomy (level of autonomy, 0–1), meaning that the motions along the robotic links and joints remain fully controlled by the operating surgeon. The surgeon is also in charge of procedural planning and adaptation to changes in the environment that occur during the intervention. This remains the case even though the growing shortage of skilled surgical personnel, the ever-increasing procedural complexity, and rising health care costs provide a powerful incentive to move decision-making away from human supervisors (61). Endowing robots with full autonomy (level of autonomy, 5) thereby promises to democratize surgery, help make surgical quality ubiquitous, standardize outcomes, and reduce recurrences.
Beyond health care, perhaps the best-known example of using supervised autonomy in a dynamic environment is adaptive cruise control in a car (level of autonomy, 1). A clinical situation in which specific surgical subtasks are outsourced to the robot is the use of the Robodoc (Integrated Surgical Systems) (62) or AquaBeam (PROCEPT BioRobotics Corp.) (61) systems. For cars to advance to a higher level of autonomy, they require an exceptional level of sensory enrichment coupled with artificial intelligence–advanced data computing (Fig. 5 (63)). Subsequently, active interaction between the components of data acquisition, processing, and automated perception assessment (i.e., decision-making) allows vehicles to cope with environmental variations. Translation of these concepts to a surgical robot demands a more intelligent interaction between the robot and the surgical environment (Fig. 5 (64)), something that can be facilitated by pharmaceuticals and molecular imaging or sensing technologies. Considering how easy it is for surgeons to overlook tumor fragments during surgery, control strategies that raise the diagnostic accuracy provide an obvious starting point when exploring surgical automation (65,66). Over time, such efforts will help, among others, to transfer the above-mentioned freehand IGR technologies into hands-free technologies that empower surgeons in their perception.
The rise of autonomous vehicles poses obvious dilemmas with regard to liability and ethics (67). These topics are being critically examined by today’s lawmakers, starting with regulations concerning the use of artificial intelligence. As the act of surgery is fault-intolerant, emphasis should be put on addressing these dilemmas before robots are entrusted to reliably identify, and quickly react to, unpredictable clinical situations (61).
CONCLUSION
The rise of IGR offers the field of IMI unique (out-of-the-box) growth capabilities—not only in the traditional terms of pharmaceuticals and radiopharmaceuticals, engineering, physics, and expanding of clinical indications but also in terms of embracing up-and-coming digital, performance-guided, and autonomous-surgery paradigms. Exploration of these opportunities will likely help expand the impact that nuclear medicine and molecular imaging have on the future of patient care.
DISCLOSURE
This research was financially supported by a Netherlands Organization for Scientific Research TTW-VICI grant (grant TTW 16141) and a KIC grant (grant 01460928). No other potential conflict of interest relevant to this article was reported.
Footnotes
Published online Jul. 11, 2024.
- © 2024 by the Society of Nuclear Medicine and Molecular Imaging.
- Received for publication March 28, 2024.
- Accepted for publication June 5, 2024.