Journal of Medical Robotics Research, Vol. 8, Nos. 1 & 2 (2023) 2340001 (14 pages)
© World Scientific Publishing Company
DOI: 10.1142/S2424905X23400019

A Surgical Robotic Framework for Safe and Autonomous Data-Driven Learning and Manipulation of an Unknown Deformable Tissue with an Integrated Critical Space

Braden P. Murphy* and Farshid Alambeigi†
Walker Department of Mechanical Engineering, University of Texas at Austin, Austin, TX 78712, USA
Aside from reliable robotic hardware and sensing technologies, to successfully transition from teleoperation to an autonomous and safe minimally invasive robotic surgery on unknown Deformable Tissues (U-DTs), various challenges need to be simultaneously considered and tackled to ensure safety and accuracy of the procedure. These challenges mainly include but are not limited to online modeling and reliable tracking of a U-DT with integrated critical tissues as well as development of reliable and fast control algorithms to enable safe, accurate, and autonomous surgical procedures. To collectively and simultaneously address these challenges and toward performing an autonomous and safe minimally invasive robotic surgery in a confined environment, in this paper, we present a surgical robotic framework with (i) a real-time vision-based detection algorithm based on a Convolutional Neural Network (CNN) architecture that enables tracking the time-varying deformation of a critical tissue located within a U-DT and (ii) a complementary data-driven adaptive constrained optimization approach that learns the deformation behavior of a U-DT while autonomously manipulating it within a time-varying constrained environment defined based on the output of the CNN detection algorithm. To thoroughly evaluate the performance of the proposed framework, we used the da Vinci Research Kit (dVRK) and performed various experiments on a custom-designed U-DT phantom with an arbitrary deformable vessel embedded within the phantom's body (serving as the U-DT's integrated critical space). Various experiments were conducted and analyzed to demonstrate the performance of the proposed framework and ensure robustness and safety while performing an autonomous surgical procedure.

Keywords: Autonomous deformable tissue manipulation; autonomous critical tissue detection; adaptive constrained optimization.
1. Introduction
The transition from telesurgery on various deformable tissues (DTs) to a semi/autonomous, safe, and intelligent Minimally Invasive Robotic Surgery (MIRS) is becoming possible due to the advancement of robotic technologies and, particularly, the development of various model-based data-driven learning and computer vision algorithms [1–3].

Received 5 September 2022; Revised 27 December 2022; Accepted 27 December 2022; Published 18 March 2023. Published in JMRR Special Issue on International Symposium on Medical Robotics (ISMR 2022). Guest Editor: Iulian Ioan Iordachita.
Email Addresses: *[email protected], †farshid.alambeigi@austin.utexas.edu
† Corresponding author.
NOTICE: Prior to using any material contained in this paper, the users are advised to consult with the individual paper author(s) regarding the material contained in this paper, including but not limited to, their specific design(s) and recommendation(s).
Nevertheless, aside from reliable robotic hardware and sensing technologies, to successfully transition from teleoperation to an autonomous and safe MIRS on DTs, various challenges need to be simultaneously considered and tackled to ensure safety and accuracy of the procedure [1,3]. These challenges mainly include but are not limited to (i) online modeling and reliable tracking of an Unknown DT (U-DT) deformation during the manipulation procedure, (ii) online detection and tracking of a no-fly-zone or critical region(s) (e.g. vessels, nerves, and tumors) within the U-DT where its shape and geometry might also deform during the surgical procedure, and (iii) development of reliable and fast control algorithms to enable a safe, accurate, and autonomous manipulation of an unknown DT based on the learnt and observed deformation of the tissue and the critical region. To address the aforementioned challenges associated with online deformation modeling, tracking, and control
of a generic deformable object, various model-based and data-driven techniques have been proposed in the literature [4–6]. For example, to control one-dimensional (e.g. rope, wire, and surgical thread) [7–9], two-dimensional (e.g. fabric and sheet) [10,11], and three-dimensional deformable objects, various techniques such as visual servoing [12,13], the diminishing rigidity concept [11], latent dynamics learning [14], learning-based model predictive control algorithms [15–17], and, more recently, position-based dynamics [18] have been introduced in the literature. More details about these techniques can be found in the works by Yin et al. [4] and Zhu et al. [6].

Similar to the aforementioned techniques for manipulation of a generic deformable object, and toward an autonomous MIRS on a U-DT, the literature also documents researchers' efforts in addressing the challenges associated with tissue deformation modeling and control in an autonomous surgical procedure. For example, Shademan et al. [19] used a pure sensor-based approach to perform a supervised autonomous surgery using the STAR system and avoided mathematical modeling of the U-DT. Instead, to track the deformation of the DT in real-time and ensure the safety of the procedure, the authors solely relied on a 3D visual tracking system and near-infrared fluorescent (NIRF) markers attached to the surface of the U-DT. On the other hand, many researchers have also developed different types of model-based and model-independent techniques to understand the deformation of homogeneous and heterogeneous U-DTs. For example, a Finite Element (FE) modeling approach has been implemented in the SOFA software [20] to simulate needle insertion into a DT [21]. Alambeigi et al. [5,22–24] also introduced a vision-based online data-driven technique to simultaneously learn the deformation of a U-DT with homogeneous or heterogeneous properties and then manipulate it to perform a predefined task in an
unconstrained environment. Later, Retana et al. [25] extended this work to perform simultaneous learning and manipulation of a U-DT with homogeneous properties in a constrained environment. Vision-based deep reinforcement learning has also been proposed by Thananjeyan et al. [26] and Nguyen et al. [27] for autonomous U-DT debridement and manipulation. However, these works have been evaluated on homogeneous DTs and under experimental conditions that do not replicate a realistic surgical setting. Pedram et al. [28] also proposed a reinforcement learning approach to indirectly manipulate a U-DT in a simulation environment. As an alternative to the works mentioned above, for autonomous deformation tracking of a U-DT, a 3D visual perception algorithm has been introduced by Lu et al. [29]. This algorithm, however, has not been evaluated in a realistic constrained and bounded autonomous U-DT manipulation task and may not ensure a safe and reliable autonomy. More recently, based on the position-based dynamics technique, a real-to-sim registration approach has been proposed to capture the deformation of a U-DT during manipulation with a surgical instrument [30]. Nevertheless, similar to the above-mentioned literature, this study solely focuses on real-to-sim registration and does not utilize this registration for simultaneous deformation learning and manipulation of a U-DT.

Overall, a review of the literature indicates that the proposed approaches (i) often rely on simplified assumptions during the U-DT modeling procedure, including homogeneous mechanical properties and a 2D geometry for the U-DT; (ii) consider a MIRS in an unconstrained and free workspace during manipulation of the U-DT; (iii) do not consider the presence of a critical tissue and its potential deformation during the manipulation, even though in a generic MIRS (e.g. tumor dissection and tissue debridement) and depending on
Fig. 1. The surgeon and endoscopic views of a U-DT phantom with an embedded critical tissue (i.e. a simulated vessel) indirectly being manipulated using the dVRK. The endoscopic view shows the detected CS and SS (indicated as a transparent green overlay) using our proposed CNN-based autonomous detection algorithm. The figure also illustrates a sewing pin inserted into the U-DT to use its head as the feature point (marked with a red dot in the image plane) and a target point indicated with a blue dot in the endoscope view.
the surgical site, the U-DT may encompass critical tissues that may experience a large deformation during manipulation in a constrained environment; and (iv) solely focus on one of the modeling, tracking, or control problems and do not attempt to address these problems simultaneously.

As our main contributions, and to collectively address the above-mentioned limitations toward performing an autonomous and safe MIRS in a confined environment, to our knowledge, this paper is the first to present a complete robotic framework that simultaneously enables (i) learning the deformation model of a U-DT with heterogeneous mechanical properties on-the-fly using a model-independent data-driven approach, (ii) fast and reliable identification of the Critical Space (CS) within the U-DT using a pretrained Convolutional Neural Network (CNN) detection algorithm, and (iii) safe manipulation of the U-DT within a time-variant and variable-size Safe Space (SS) using an adaptive constrained optimization framework. To thoroughly evaluate the performance of the proposed framework, we used the dVRK and performed various experiments on a custom-designed U-DT phantom with an arbitrary deformable vessel embedded within the phantom's body (serving as the U-DT's CS). Various experiments were conducted to demonstrate the performance of the proposed framework and analyzed using different evaluation metrics.
2. Methods

2.1. Problem statement and assumptions
Fig. 2. The experimental setup used to evaluate the performance of the proposed adaptive constrained optimization framework for autonomous and safe manipulation of a U-DT. In this figure, curved arrows represent the known or unknown mappings between the components of this robotic system formulated by the corresponding Jacobian matrices. The straight arrows represent a position vector with respect to the corresponding coordinate frames.
As shown in Fig. 2, let's consider a U-DT with an embedded CS that is being manipulated with a robotic arm while a camera/endoscope records the tissue deformation during the procedure. The goal is to indirectly and autonomously manipulate a particular feature point (shown by the red dot in the figure) on the U-DT's surface ($p \in \mathbb{R}^3$) to align with a predefined desired point in the image space ($i \in \mathbb{R}^2$, shown by the blue dot in the figure) without a-priori knowledge of the deformation behavior of the U-DT. Throughout the whole manipulation task, the CS needs to always be detected and the feature point must be kept within the SS, indicated as a transparent green overlay in the figure. A good motivating example of such a case is autonomous cryoablation of kidney tumors, in which ablation probes need to be autonomously and safely inserted within a tumor inside a U-DT. As Alambeigi et al. [22] have shown, to perform this task, one can safely and robustly manipulate the unknown tissue, and particularly the insertion locations on the tissue, toward the already known desired insertion locations of the probes in the workspace, actively manipulating these locations on the U-DT to ensure an easy and safe insertion during the ablation procedure. For this specific example, the CS can be considered as the unsafe regions and vessels located on the U-DT that we always want to ensure the probes do not perforate during the manipulation procedure and endanger the patient's safety. To address this problem, the following assumptions/remarks have been made.
Assumption 2.1. The robot's end effector has already grasped the U-DT at the start of the manipulation and the gripper will not disengage throughout the task.

Assumption 2.2. The CS may contain a tissue that has different mechanical properties than the U-DT, therefore creating heterogeneous mechanical properties for the U-DT.

Assumption 2.3. Noisy visual feedback from a fixed camera is readily available throughout the manipulation procedure, and we can visually measure the tissue deformation to track the feature point and the critical tissue during the manipulation. We also assume the camera has not been calibrated and the mapping between the image space and the task space is unknown.

Assumption 2.4. We assume the original shape of the tissue inside the CS is known a-priori. However, the shape of this tissue can vary and deform during the manipulation procedure.

Assumption 2.5. As shown in Fig. 2, in the proposed framework, the image space is divided into the SS (shown by the transparent green bounding box) and the
CS defined by a bounding box around the critical tissue. The SS defines a constrained allowable region for manipulation of the feature point (shown by the red dot in the figure) throughout the autonomous manipulation. Of note, due to the deformation of the U-DT and the critical tissue, the size of these spaces is not constant and may vary during the manipulation of the U-DT. To ensure safety of the procedure, these spaces will be detected in real-time using a detection algorithm.

Assumption 2.6. In this paper, as shown in Fig. 1, manipulation in constrained environments refers to the cases in which a critical tissue exists within the U-DT and the feature point is to be constrained to stay outside the region in which this critical tissue resides. Conversely, in an unconstrained manipulation, there is no critical tissue and the robot can freely manipulate the feature point within the workspace.

2.2. Mathematical formulation of the proposed control framework
To develop the proposed adaptive constrained optimization framework for autonomous and safe manipulation of a U-DT, we follow our formulations proposed by Alambeigi et al. [5,22] and Retana et al. [25]. Particularly, we extend the unconstrained formulation proposed by Alambeigi et al. [5] to simultaneously learn the deformation behavior of a U-DT on-the-fly while indirectly and safely manipulating it inside a time-varying constrained environment, defined by the SS in Fig. 2. In this regard, we first model the kinematics of the robot shown in Fig. 2 and grasping the U-DT as follows:

$$\Delta r(t) = J_r(q(t))\,\Delta q(t), \qquad (1)$$

where $J_r(q(t)) \in \mathbb{R}^{3 \times n}$ is the Jacobian of the robot that maps the change in the joint space motion $\Delta q(t) \in \mathbb{R}^n$ to the task space displacement of the robot $\Delta r(t) \in \mathbb{R}^3$ in a small time interval $\Delta t$. In our formulation and in the setup illustrated in Fig. 2, we assume the camera has not been calibrated and the mapping between the image space and the task space can be modeled using an unknown, time-varying, and linear Jacobian matrix $J_I \in \mathbb{R}^{2 \times 3}$. Considering this and an unknown, time-varying, and linear deformation model for the U-DT (i.e. $J_{DT} \in \mathbb{R}^{3 \times 3}$), at each time instant $t$, we may map the relation between the change in the displacement of the grasping point in the Cartesian space $\Delta r \in \mathbb{R}^3$ to displacements of the feature point in the image space $\Delta i = [\Delta x, \Delta y]^\top \in \mathbb{R}^2$ using an unknown combined Jacobian matrix $J_c = J_I\,J_{DT} \in \mathbb{R}^{2 \times 3}$. Therefore, we can write $\Delta i(t) \approx J_c(r(t))\,\Delta r(t)$. More information about the derivation of these Jacobians can be found in Alambeigi et al. [5].
To iteratively learn and estimate the unknown combined Jacobian matrix $J_c$ during manipulation, we use the first-rank Broyden update rule [31]:

$$\hat{J}_c(t + \Delta t) = \hat{J}_c(t) + \alpha\,\frac{\Delta i(t) - \hat{J}_c(t)\,\Delta r(t)}{(\Delta r(t))^\top\,\Delta r(t)}\,(\Delta r(t))^\top, \qquad (2)$$

where $0 < \alpha \leq 1$ is a constant parameter controlling the rate of change of $\hat{J}_c$. Of note, this numerical estimation approach calculates a Jacobian matrix at iteration $k$ that results in the smallest possible change in its Frobenius matrix norm $\|\hat{J}_c^{\,k} - \hat{J}_c^{\,k-1}\|_F$ with respect to its previous estimate made at iteration $k-1$. This critical feature ensures a smooth motion during the tissue manipulation task [5].

Next, to iteratively and simultaneously use the estimated combined Jacobian matrix $\hat{J}_c$ and manipulate a U-DT with an embedded shape-varying critical tissue, we propose the following adaptive constrained optimization framework. In this formulation, we define an objective function that minimizes the distance between the feature point (i.e. the sewing pin's head) location $i(t)$ and the desired goal location $i_d$ at each time $t$ (i.e. $\Delta e(t) = i_d - i(t)$) while ensuring a minimum robotic motion in the joint space:

$$\underset{\Delta q(t)}{\operatorname{argmin}} \;\Big\| \hat{J}_c(r(t))\,\underbrace{J_r(q(t))\,\Delta q(t)}_{\Delta r(t)} - \Delta e(t) \Big\|_2^2 + \omega\,\|\Delta q(t)\|_2^2 \quad \text{s.t.} \quad C(q(t))\,\Delta q(t) \leq d(q(t)), \qquad (3)$$

where $\omega$ is a positive nonzero weighting constant. Matrix $C \in \mathbb{R}^{N_c \times n}$ and vector $d \in \mathbb{R}^{N_c}$ define $N_c$ linear time-varying inequality constraints that represent the variable geometry of the SS bounding box (shown in Figs. 2 and 3) for manipulation of the feature point.
2.3. Definition of bounding boxes representing the CS and SS
The main difference in this work compared with the previous literature [5,25] is that, in the current problem statement and formulation, matrix $C(t)$ and vector $d(t)$ are both time-variant and depend on the shape of the critical tissue inside the U-DT and the detected bounding box around the CS (shown in Fig. 3). As mentioned in Assumption 2.5, in this study, we assume that the shape of the critical tissue may vary during the manipulation, and this deformation needs to be obtained in real-time using a computer vision algorithm throughout the manipulation procedure. Of note, details of this CNN detection algorithm will be discussed in Sec. 3.4. Using the detected CS, we can then calculate the bounding box representing the SS using the $[X_{min}, X_{max}]$ and $[Y_{min}, Y_{max}]$ variables (illustrated in Fig. 3) and
subsequently define the time-varying constraints in (3) to ensure safety throughout the autonomous manipulation procedure. To ensure the feature point $i(t)$ is always within the SS bounding box during the manipulation of the U-DT, the following inequality constraints need to always be satisfied:

$$X_{min}(t) - x(t) \leq (\hat{J}_c J_r)_{1,:}\,\Delta q \leq X_{max}(t) - x(t),$$
$$Y_{min}(t) - y(t) \leq (\hat{J}_c J_r)_{2,:}\,\Delta q \leq Y_{max}(t) - y(t),$$

where $(\cdot)_{1,:}$ and $(\cdot)_{2,:}$ represent the first and second rows of the $\hat{J}_c J_r \in \mathbb{R}^{2 \times n}$ matrix, respectively. These inequalities can then be written in matrix form using matrix $C(t)$ and vector $d(t)$ to be used in (3) as follows:

$$C(t) = \begin{bmatrix} -\hat{J}_c J_r \\ \hat{J}_c J_r \end{bmatrix}, \qquad d(t) = \begin{bmatrix} -X_{min}(t) + x \\ -Y_{min}(t) + y \\ X_{max}(t) - x \\ Y_{max}(t) - y \end{bmatrix}, \qquad (4)$$

where $C(t) \in \mathbb{R}^{4 \times n}$ and $d(t) \in \mathbb{R}^4$. Of note, in this formulation, the definition of $X_{min}(t)$ is flexible, as it can be defined at the origin of the X-axis in the image plane or limited to another value depending on the application. $X_{max}(t)$ is the x coordinate of the CNN bounding box top or bottom right corner. Similarly, $Y_{min}(t)$ is defined by the y coordinate of the CNN bounding box bottom corner and $Y_{max}(t)$ represents the y coordinate of the CNN bounding box top corner. These parameters are illustrated in Fig. 3. Of note, the location of the bounding box, the current location of the feature point, and the boundary between the SS and CS are time-variant and need to be updated at each iteration as the U-DT is manipulated.
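As a minimal sketch of how (4) can be assembled in code, the snippet below builds $C(t)$ and $d(t)$ from the two tracked right corners of the detected CS bounding box; the function and argument names are hypothetical, and the corner convention follows Fig. 3 under the sign reconstruction above:

```python
import numpy as np

def build_constraints(J_hat, J_r, box_top, box_bottom, x, y, x_min=0.0):
    """Assemble the time-varying constraint pair (C(t), d(t)) of Eq. (4).

    J_hat      : (2, 3) estimated combined Jacobian
    J_r        : (3, n) robot Jacobian
    box_top    : (u, v) top right corner of the detected CS box (pixels)
    box_bottom : (u, v) bottom right corner of the detected CS box (pixels)
    x, y       : current feature-point coordinates in the image plane
    x_min      : left limit of the SS (image origin by default)
    """
    A = J_hat @ J_r                    # (2, n): maps joint motion to image motion
    x_max = box_top[0]                 # right edge of the CS defines X_max
    y_max, y_min = box_top[1], box_bottom[1]
    C = np.vstack((-A, A))             # (4, n): lower bounds, then upper bounds
    d = np.array([x - x_min, y - y_min, x_max - x, y_max - y])
    return C, d
```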
Fig. 3. Geometrical illustration of the bounding boxes representing the CS and SS. This information will be used to define the constraints of the proposed adaptive constrained optimization framework using the top and bottom right corners.

3. Robotic System Architecture

To evaluate the performance of the proposed adaptive constrained optimization framework for autonomous and safe manipulation of a U-DT, we developed an experimental setup including the dVRK as our robotic platform and a custom-designed U-DT phantom with an embedded deformable critical tissue, as well as a software architecture enabling real-time detection of the CS and the feature point in the image plane. The following sections describe the components of our experimental setup and software architecture.

3.1. Robotic platform
As illustrated in Fig. 2, to conduct our experiments, we used the dVRK robotic framework and its open-source electronics and software (cisst/SAW libraries) developed by researchers at Johns Hopkins University [32]. For the experiments, we used one Patient Side Manipulator (PSM) and integrated an EndoWrist® ProGrasp™ (Intuitive Surgical, Inc., California, USA) instrument with the PSM. This instrument has two 16-mm fingers that ensure a firm grasp of the fabricated U-DT phantom throughout the procedure. We also used the stereo camera installed on one Endoscopic Camera Manipulator (ECM) to provide an endoscopic view of the U-DT throughout the procedure.

3.2. U-DT phantom with embedded critical tissue
A critical element of the detection algorithm and the proposed framework is the custom-designed U-DT phantom with an embedded critical tissue. To develop such a phantom, shown in Fig. 4, we first simulated a deformable, arbitrary human blood vessel structure (as our critical tissue) by using pieces of soft rubber silicone tubing (McMaster-Carr, 51845K52) with 1 mm internal diameter and 2.2 mm outer diameter and gluing them together to create the vein branches (Fig. 4). Next, we applied three coats of blue spray paint (Krylon) to make the fabricated vessel look more realistic and assist the computer vision detection algorithm. Of note, the shape and number of these critical tissues can readily be modified. To fabricate the U-DT phantom, we first designed a rectangular 3D-printed PLA mold with internal dimensions of 95 × 95 × 9 mm (shown in Fig. 4), prepared a silicone mixture made of a 1:1 ratio of two-part Smooth-On Ecoflex™ 00-35 FAST Platinum Cure Silicone (Smooth-On Inc.), and degassed it inside a vacuum chamber to create
a uniform mixture. It is worth mentioning that the 9 mm thickness of the mold was chosen based on the maximum opening of the utilized ProGrasp forceps and to ensure a firm grasp of the fabricated U-DT phantom. The other dimensions of the mold were chosen arbitrarily and can readily be modified. Next, to make the U-DT with the embedded vessel structure, we performed a two-step molding procedure. In the first step, we poured the degassed silicone mixture inside the mold and made a U-DT layer with 5 mm thickness. Next, we put the fabricated vessel structure on the top surface of this cured silicone layer and poured the rest of the silicone into the mold to embed the simulated critical tissue (i.e. the vein phantom) inside the U-DT phantom. Finally, we let the silicone cure for 30 min before removing the phantom from the mold. Of note, the selected 9 mm thickness of the phantom and the two-step molding procedure ensured that the fabricated blue vein is completely integrated with the U-DT and can deform continuously together with the silicone structure. Figure 4 shows the fabricated U-DT phantom with the embedded blue vein structure.

Fig. 4. The fabricated mold (right) and U-DT phantom (left) with an embedded critical tissue (i.e. vein structure).
3.3. Software architecture
Figure 5 illustrates the components of the software architecture developed for performing an autonomous and safe data-driven manipulation of the developed U-DT phantom using the dVRK system and the adaptive constrained optimization framework described in Sec. 2.2. As shown, this architecture consists of two main frameworks responsible for (i) real-time CS and feature point detection using the visual feedback provided by the endoscope (i.e. the Endoscope Visual Framework) and (ii) online data-driven deformation learning and manipulation of the U-DT phantom (i.e. the Deformable Tissue Control Framework).
Fig. 5. The overall software architecture developed for performing an autonomous and safe data-driven manipulation of the developed U-DT phantom using the dVRK system and the adaptive constrained optimization framework described in Sec. 2.2. As shown, this architecture consists of two main frameworks responsible for (i) real-time CS and feature point detection using the visual feedback provided by the endoscope (i.e. the Endoscope Visual Framework) and (ii) online data-driven deformation learning and manipulation of the U-DT phantom (i.e. the Deformable Tissue Control Framework).
Particularly, as shown in Fig. 5, the dVRK endoscope is the core of the processes performed in the Endoscope Visual Framework by producing an S-video analog signal output that can be grabbed and displayed via a USB video frame grabber (Hauppauge Live 2) and ROS packages (i.e. Gstreamer and gscam).(a) The grabbed stereo video frames are then used in the Endoscope Visual Framework, and particularly the Tensorflow(b) and OpenCV libraries, to identify the CS (i.e. the embedded vessel inside the U-DT) and track the feature point (i.e. the red pin) shown in Fig. 2, respectively. This information is then communicated at a frequency of 10 Hz through ROS nodes and the MATLAB-ROS bridge [33] to the Deformable Tissue Control Framework, to be used for U-DT deformation learning and the decision/commanding phase for deformable tissue manipulation using the adaptive constrained optimization framework described in Sec. 2.2. To solve (3), we used the MATLAB lsqlin function, an active-set method for solving constrained linear least-squares problems.

3.4. Endoscope visual framework

As shown in Fig. 5, in the Endoscope Visual Framework, we used the object detection library in Tensorflow to train a CNN that can detect the CS in real-time in the video grabbed from the endoscope throughout the manipulation task. When the critical tissue is detected, a rectangular box is bounded around the tissue to represent the CS (Figs. 2 and 3). Next, this bounding box is used within another computer vision algorithm to identify its corners within the image plane and actively define the SS region using the constraints formulated in the proposed adaptive constrained optimization framework. Simultaneously, we use another computer vision algorithm to track the location of the feature point in the image plane in real-time for U-DT deformation learning and manipulation purposes. The following sections discuss in detail the developed algorithms that perform these tasks in real-time.

3.4.1. Data collection and labeling for the CNN architecture

Creating an accurate and reliable CS detector was one of the critical challenges of this study. Not only was it difficult to make the detector generic enough to detect the embedded vessel at different endoscope and U-DT phantom initial configurations (i.e. mounting location and orientation), but the depth of view (i.e. distance of the U-DT to the endoscope lens), lighting condition, light reflection, occlusion, and phantom orientation could all also vary randomly during the manipulation and, therefore, directly affect the performance of the CS detection and the proposed framework. During the manipulation, the U-DT and the embedded critical tissue are both dynamically deforming, and their deformation behavior may differ through the procedure, making reliable CS detection even more challenging. Therefore, to effectively train a CNN for critical tissue detection, we needed a diverse and sufficient custom-made data set covering all of the mentioned conditions. As there were no open-source data sets available for our custom-made deformable U-DT phantom to enable autonomous and reliable CS detection, we simulated 11 different experimental conditions that may occur during a U-DT manipulation and collected the required data set for CNN training. Of note, in these experiments, we ensured that both the U-DT and the embedded vessel experienced a large and continuous deformation in various deformation modes, shown in Fig. 6. To simulate these 11 scenarios, as shown in Fig. 2, the U-DT was first fixated at one end and grasped by the EndoWrist® ProGrasp™ at another corner. Next, to create a consistent data collection approach, the PSM was controlled using a custom-written MATLAB script to pull the phantom from its original shape by a maximum of 50 mm in the x-direction of the robot's task space with 1 mm displacement at each sequence, return to the original position, and then repeat a similar motion in the y- and z-directions. Of note, we chose 50 mm as the maximum stretch because it created a large deformation for the U-DT and the embedded blue vessel without disengaging the forceps from the U-DT. Each manipulation procedure was recorded by the endoscope and the Kazam screen recorder(c) for collecting the training data set. As shown in Fig. 6, we used this motion sequence to manipulate the phantom in 11 scenarios including five different lighting conditions, the mounting orientation of the U-DT, four lighting interferences (e.g. different glare conditions), partial occlusion of the blue vessel, and four different distances between the U-DT and the endoscope lens. In total, we collected 110 images for training of the MobileNetV2 CNN. After performing these experiments, an additional MATLAB script was used to convert the recorded video frames into images with proper randomized Universally Unique Identifiers (UUIDs) to enable CNN training within the Tensorflow libraries. Once the data set of training images was collected, labels were created using labelImg(d) by taking each image and outputting an XML file with the same UUID file name and the label name (e.g. blue vein). Similar to the exemplary images shown in Fig. 6(c), this approach allowed us to take custom images and turn them into custom labels. Next, as can be seen in Fig. 6(d), we marked the whole embedded blue vein using a rectangular bounding box as the target CS to be detected.

(a) https://github.com/jhu-dvrk/dvrk-ros/blob/master/dvrk_robot/video.md
(b) https://www.tensorflow.org/lite/examples/object_detection/overview
(c) https://github.com/hzbd/kazam
(d) https://github.com/heartexlabs/labelImg
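The MATLAB frame-conversion script itself is not listed in the paper; as a rough, hypothetical Python analog of that step, one could split each recorded video into UUID-named training images as follows (path names and the frame stride are illustrative):

```python
import os
import uuid
import cv2

def video_to_images(video_path, out_dir, stride=30):
    """Split a recorded manipulation video into UUID-named training images."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:  # keep one frame every `stride` frames
            cv2.imwrite(os.path.join(out_dir, f"{uuid.uuid4()}.jpg"), frame)
        idx += 1
    cap.release()
```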
Fig. 6. The steps performed for data collection and image labeling before training the CNN-based CS detection algorithm. As shown, we used our experimental setup and simulated different U-DT manipulation scenarios to collect a sufficient data set for training the CNN.
3.4.2. CNN architecture for autonomous CS detection
For real-time CS detection, multiple well-established neural network architectures exist that could have been used as the backbone for CS boundary detection within our software architecture. Inception-v3 [34] and ResNet [35] are example networks that could serve as the basis for object detection computer vision algorithms. For this study, we selected MobileNetV2 [36] as a CNN architecture that can deliver high accuracy while keeping the number of parameters and mathematical operations as low as possible. Specifically, we used the SSD MobileNetV2 FPNLite 320×320 architecture, which contains an initial fully convolutional layer with 32 filters followed by 19 residual bottleneck layers [36]. The inverted residual bottleneck is particularly useful as it enables the optimization of memory in resource-constrained architectures. Despite having a dedicated desktop machine to handle the software architecture shown in Fig. 5 (including the endoscope video feed, the CNN object detection and computer vision feature detection algorithms, U-DT deformation learning, and dVRK control), RAM and CPU were pushed to their limits when the whole framework ran in real-time. Therefore, a light CNN architecture such as MobileNetV2 was the perfect choice for our purpose. To implement the autonomous CS detection module using the prepared labeled images (shown in Fig. 6), the Tensorflow libraries(e) were used to train and test our CNN offline using the MobileNetV2 architecture and then deploy it for real-time detection. Figure 7 shows exemplary snapshots demonstrating the performance of the MobileNetV2 CNN architecture in real-time detection of the critical tissue (i.e. vessel) embedded within the U-DT phantom while the robot is manipulating the tissue. Of note, we solely used the top and bottom right corners of the detected bounding box to define the boundary of the CS and SS. More details will be described in the following section.

(e) https://github.com/nicknochnack/TFODCourse
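Although the paper does not list its inference code, a trained Tensorflow Object Detection model exported as a SavedModel is typically queried as sketched below; the model path and score threshold are illustrative assumptions:

```python
import tensorflow as tf

# Load the exported SSD MobileNetV2 FPNLite detector (path is illustrative).
detect_fn = tf.saved_model.load("exported_model/saved_model")

def detect_cs(frame_rgb, score_thresh=0.5):
    """Run the trained CS detector on one endoscope frame (H, W, 3 uint8)."""
    inp = tf.convert_to_tensor(frame_rgb)[tf.newaxis, ...]  # add batch dim
    out = detect_fn(inp)
    # Boxes are normalized [ymin, xmin, ymax, xmax]; keep confident detections.
    boxes = out["detection_boxes"][0].numpy()
    scores = out["detection_scores"][0].numpy()
    keep = scores >= score_thresh
    return boxes[keep], scores[keep]
```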
3.5. CS boundaries definition and feature point tracking computer vision algorithms
We used the Shi-Tomasi [37] and Lucas-Kanade [38] OpenCV algorithms to enable real-time definition of the CS and SS boundaries and tracking of the feature point located on the surface of the U-DT phantom, respectively (shown in Fig. 3). Specifically, the Shi-Tomasi algorithm [37] explores the images provided by the endoscope (e.g. Fig. 3) containing the bounding box around the embedded blue vessel (shown in Fig. 7) detected using the MobileNetV2 detection algorithm, and then automatically returns the four corners of the detected box. As shown in Fig. 5, the coordinates of these detected corners in the image are then transferred in real-time to the Deformable Tissue Control Framework to define the boundaries of the SS using the $[X_{min}, X_{max}]$ and $[Y_{min}, Y_{max}]$ variables. As described in Sec. 2.2, using these variables, we can then calculate the time-varying matrix $C(t)$ and vector $d(t)$, as the constraints of (3), to ensure safety during the autonomous manipulation procedure.
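A minimal OpenCV sketch of one tracking step is shown below; the actual implementation was written in C++, so the function name, mask handling, and window size here are illustrative assumptions:

```python
import cv2

def track_step(prev_frame, frame, roi_mask, pin_prev):
    """One visual-tracking step: Shi-Tomasi corners for the detected CS box
    region and pyramidal Lucas-Kanade optical flow for the pin-head point.

    prev_frame, frame : consecutive BGR endoscope images
    roi_mask          : uint8 mask selecting the CNN-detected CS box
    pin_prev          : (1, 1, 2) float32 pin location in prev_frame
    """
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Shi-Tomasi: strongest corners inside the detected bounding-box region.
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=4,
                                      qualityLevel=0.01, minDistance=10,
                                      mask=roi_mask)

    # Lucas-Kanade: propagate the pin location into the new frame.
    pin_new, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pin_prev,
                                                  None, winSize=(21, 21))
    return corners, pin_new, status
```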
Fig. 7. Exemplary snapshots demonstrating the performance of the MobileNetV2 CNN architecture in real-time detection of the critical tissue (i.e. vessel) embedded within the U-DT phantom while the robot is manipulating the tissue. Of note, we solely use the top and bottom right corners of the detected bounding box to define the boundary of CS and SS.
Fig. 8. The traversed trajectory of the feature point toward the desired point, both defined inside the SS, for two different experimental conditions in (a) and (b). In the performed experiments, (a) and (b) compare the performance of the proposed adaptive constrained optimization framework with an unconstrained case, shown in cyan. The figure also shows only the initial and final locations of the CS right boundary for the Adaptive Pin Trajectory 1 case at the beginning and end of the U-DT manipulation, as well as the trajectory of the top and bottom corners of this boundary throughout the experiments. In these figures, red and blue circles represent the top and bottom right corners of the detected bounding box at the beginning and end of the PBIP task. Of note, only these important corners of the detected bounding box are used to define the boundary of the CS and SS. Exemplary snapshots of the endoscope view and the overlaid CS and SS through the manipulation are also shown for the Adaptive Pin Trajectory 1 case.
Of note, as shown in Figs. 8 and 9, in this study, we solely used the top and bottom right corners of the detected bounding box to define the boundary of the CS and SS. We used OpenCV to define and place an arbitrary desired target point (shown in blue in Fig. 3) at the beginning of each experiment. Also, to reliably track the feature point (i.e. the pin shown in Figs. 1 and 3) located on the surface of the U-DT phantom, we first painted a 1.5 mm diameter sewing pin a light silver color. Next, based on our previous successful implementations of the Lucas-Kanade algorithm [5,22,23], during the experiment and as the U-DT phantom was manipulated, we used the Optical Flow package and developed a visual tracking algorithm in C++ based on the Lucas-Kanade algorithm [38] to track the location of the pin in real-time. The obtained location of the feature point was then filtered using the following first-order low-pass filter before sending it to the Deformable Tissue Control Framework:

$$\dot{f} = \Lambda\,(s - f),$$

where $s$ and $f$ represent the original and filtered signals, respectively. The diagonal matrix $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_k)$ allows us to adjust the dissipative properties of the filtered signal.
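In discrete time, this filter reduces to a simple forward-Euler step; a minimal sketch is given below (the gain values are illustrative, and dt follows the 10 Hz loop rate reported above):

```python
import numpy as np

def lowpass_step(f, s, lam, dt=0.1):
    """One discrete step of the first-order low-pass filter f_dot = Lambda (s - f).

    f   : current filtered estimate, shape (k,)
    s   : raw feature-point measurement, shape (k,)
    lam : per-channel gains, the diagonal of Lambda, shape (k,)
    dt  : sample period (0.1 s for the 10 Hz visual loop)
    """
    return f + dt * lam * (s - f)  # forward-Euler integration of the filter ODE
```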
3.6. Deformable tissue control framework
As shown in Fig. 5, in the Deformable Tissue Control Framework, based on the information transferred from the Endoscope Visual Framework (i.e. the filtered location of the feature point and the location of the SS in the image plane) at each time instant $k$, the combined Jacobian matrix $\hat{J}_c$ is first estimated using (2), and matrix $C(t)$ and vector $d(t)$ are calculated using (4). Next, the control inputs $\Delta q$ are calculated from (3) and the robotic arm is moved based on this motion command. In the next iteration, the combined Jacobian matrix is updated again using the actual measured displacement of the feature point on the U-DT in the image plane (i.e. $\Delta i$) and the displacement of the grasping point (i.e. $\Delta r$) that caused this change. This algorithm iterates until a predefined error threshold is satisfied. As mentioned, throughout the procedure, safety of the manipulation is ensured using the CS detection algorithm and the constraints of the optimization framework.
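A compact sketch of one control iteration is given below. The paper solves (3) with MATLAB's lsqlin; here we substitute SciPy's SLSQP solver as a stand-in, and the weighting omega and joint count are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def control_step(J_hat, J_r, err, C, d, omega=1e-3, n_joints=6):
    """One iteration of Eq. (3): constrained least-squares for the joint step.

    Minimizes ||J_hat @ J_r @ dq - err||^2 + omega * ||dq||^2
    subject to C @ dq <= d.
    """
    A = J_hat @ J_r                                        # (2, n) combined mapping
    cost = lambda dq: np.sum((A @ dq - err) ** 2) + omega * np.sum(dq ** 2)
    cons = {"type": "ineq", "fun": lambda dq: d - C @ dq}  # C dq <= d
    res = minimize(cost, np.zeros(n_joints), method="SLSQP", constraints=cons)
    return res.x                                           # joint-space command dq
```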
Fig. 9. The traversed trajectory of the feature point toward the desired point defined inside the CS, comparing the performance of the proposed adaptive constrained optimization framework with an unconstrained case, shown in cyan. The figure also shows only the initial and final locations of the CS right boundary for the Adaptive Pin Trajectory 1 case at the beginning and end of the U-DT manipulation, as well as the trajectory of the top and bottom corners of this boundary throughout the experiments. In this figure, red and blue circles represent the top and bottom right corners of the detected bounding box at the beginning and end of the PBIP task. Of note, only these important corners of the detected bounding box are used to define the boundary of the CS and SS. Exemplary snapshots of the endoscope view and the overlaid CS and SS through the manipulation are also shown for the Adaptive Pin Trajectory 1 case.
4. Evaluation Experiments and Results
To demonstrate the performance of the proposed adaptive constrained optimization framework for autonomous and safe manipulation of the fabricated U-DT phantom with embedded vessel using the dVRK and the described software architecture, we evaluated our algorithm by performing two different Point-Based Indirect Positioning (PBIP) [13] experiments. PBIP is a critical component of various autonomous or semi-autonomous surgical robotic interventions, as seen in the works by Zhang et al. [39] and Alambeigi et al. [22], in which a U-DT with heterogeneous mechanical properties (such as liver, kidney, or breast) often needs to be appropriately manipulated to a specific location for performing a secondary task such as tissue debridement or dissection. In this task, we control the PSM movements to align the feature point located on the surface of the phantom with the desired point in the image space (i.e. the red and blue points in Fig. 3, respectively) while ensuring that the feature point does not enter or collide with the CS boundary. In other words, throughout the whole manipulation task, the CS needs to always be detected and the feature point must be indirectly manipulated and kept within the SS. Of note, due to the deformable nature of the fabricated U-DT phantom with heterogeneous mechanical properties, the deformation behavior of the U-DT and the embedded vessel is different and may continuously change during the manipulation, demanding that the algorithm always adapt and learn their deformation behavior on-the-fly.

To evaluate the impact and performance of the adaptive constraints of (3), defined by the time-varying matrix $C(t)$ and vector $d(t)$, in a PBIP task, and to quantify the performance of the developed autonomous CNN for CS detection, we performed two different types of Adaptive Constraint PBIP (AC-PBIP) experiments. In these experiments, we intentionally defined the location of the desired point inside the SS or CS while the feature point was initialized within the SS. To demonstrate the performance of the algorithm in these two different cases, and to show the robustness of the algorithm to the initialization of the feature point, we initialized the algorithm from three different locations and repeated each experiment twice. Of note, changing the initialization point drastically affects the deformation of the U-DT and the performance of the deformation learning algorithm [5,22]. In these experiments, the manipulation continued unless the feature point collided with the moving boundary of the detected CS. Throughout the performed experiments, the autonomous CS detection algorithm was concurrently running to track the location of the embedded vessel and update the constraints to ensure safety throughout the autonomous manipulation procedure.

In the conducted experiments, we compared the performance of the proposed adaptive constrained optimization framework with a similar constraint-free framework and used the following metrics to thoroughly analyze the obtained results. First, we compared the traversed trajectory of the feature point in
the image plane from its initial location to the end of the experiments. The traversed trajectory of the feature point clearly indicates the impact of the adaptive constraints during a PBIP task on the fabricated phantom. Second, to study the convergence of the proposed framework as the algorithm iterates, we calculated $\Delta e(t) = i_d - i(t)$, in pixels, as the Euclidean distance between the desired target point and the current position of the feature point in the image plane. Third, to study the performance of the Broyden update rule in online learning and estimation of the combined Jacobian matrix $\hat{J}_c$, we implemented Yoshikawa's manipulability measure (YMM) [5,40]:

$$w = \sqrt{\det(\hat{J}_c\,\hat{J}_c^\top)}, \qquad (5)$$

where $\det(\cdot)$ denotes the determinant of a matrix.
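The YMM in (5) is a one-liner on the estimated Jacobian; a minimal sketch:

```python
import numpy as np

def ymm(J_hat):
    """Yoshikawa's manipulability measure, Eq. (5), for the (2, 3) estimate."""
    return np.sqrt(np.linalg.det(J_hat @ J_hat.T))  # det of the 2x2 Gram matrix
```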
4.1. AC-PBIP experiment with the desired point located inside the SS
For this set of experiments, we arbitrarily initialized $\hat{J}_c^{\,0} \in \mathbb{R}^{2 \times 3}$ as a matrix of all ones and, after performing some preliminary experiments, we found that $\alpha = 0.7$ resulted in the fastest convergence. We also set 1 pixel as the error threshold and stopping criterion for the algorithm. As representative of the performed AC-PBIP experiments, Fig. 8 shows an example of the conducted set of experiments with two different desired points and identical initial conditions defined inside the SS, and compares the performance of the proposed adaptive constrained optimization framework with an unconstrained case, shown in cyan. Of note, the constrained experiments were repeated twice and their corresponding traversed trajectories are plotted in green and red in Fig. 8. These figures also show the initial and final locations of the CS right boundary at the beginning and end of the U-DT manipulation as well as the trajectory of the top and bottom corners of this boundary throughout the experiments. Of note, we only plotted the right boundary of the CS because of the location of the feature point with respect to this boundary and the fact that the left boundary does not affect the performance of the algorithm. Further, Fig. 8 also presents exemplary snapshots of the endoscope view and the overlaid CS and SS through the manipulation for the Adaptive Pin Trajectory 1 case. Figure 10 also shows the Euclidean distance convergence plots and the YMM metrics through the manipulation procedure for these three experiments.
4.2. AC-PBIP experiment with the desired point located inside the CS
For this set of experiments, we used experimental parameters identical to those mentioned for the previous experimental trials. However, as opposed to the other experiments performed inside the SS, we defined the desired point in a way that ensures it always stays within the CS throughout the manipulation procedure, to specifically evaluate the performance of the CS detection algorithm and the constraints of the optimization framework (Fig. 9). Similar to the previous case, we started the experiments with feature points at identical initial conditions inside the SS, repeated this experiment twice, and compared the traversed trajectories of the proposed adaptive constrained optimization framework with an unconstrained case. Exemplary snapshots of the endoscope view and the overlaid CS and SS through the manipulation for one of the performed AC-PBIP trials are also shown in Fig. 9. Additionally, Fig. 10 indicates the Euclidean distance convergence plots and the YMM metrics through the manipulation procedure for these three experiments. It is worth mentioning that in the shown YMM plots, the numerical values of this metric do not convey useful information, since the metric has been calculated based on an estimated Jacobian matrix. However, as we have shown in our previous works [5,22,23], sudden changes in this metric represent useful information during manipulation of the U-DT, such as a change in the direction of manipulation of the U-DT to converge to the desired goal or a change in tissue deformation properties.
5. Discussion
Figure 7 clearly shows the robust performance of the CS detection algorithm in reliably identifying the embedded vessel within the U-DT phantom during its manipulation. It is worth emphasizing that although the bounding boxes shown in Fig. 7 do not completely cover the CS, this does not affect the performance and reliability of the proposed framework. As can be seen in this figure, the algorithm can always reliably and correctly detect the right side of the CS, which is what is actively used within the constraints of the optimization framework to ensure safety of the procedure. In other words, the left corners of the bounding box are not used within the constraints and do not affect the performance of the proposed algorithm. Moreover, in this formulation, the size of the detected bounding box does not affect the performance of the algorithm, as we only require the top and bottom right corners of the bounding box for our formulation to ensure safety. Similarly, investigation of Fig. 8 demonstrates that the autonomous CS detection algorithm could successfully track the embedded vessel and continuously identify the CS (and, particularly, the right border of the CS) during the manipulation procedure and under various deformation and lighting conditions. The continuity of the trajectories representing the top and bottom corners of the CS right boundary also proves reliable detection of the embedded vessel within the U-DT phantom.
Fig. 10. The left column plots provide, in solid and dashed lines, the complementary Euclidean distance convergence (top figure) and YMM (bottom figure) metrics for the experiments shown in Figs. 8(a) and 8(b), respectively. Similarly, these metrics have been calculated and shown in the right column for the experiments demonstrated in Fig. 9.
Similarly, the convergence of the performed AC-PBIP experiments indicates the reliable and consistent estimation of the CS and its corners, as it directly affects the numerical values of the time-varying matrix $C(t)$ and vector $d(t)$ defining the constraints of (3). Of note, an inconsistent detection of the CS corners may make the solution of the optimization problem locally infeasible. The complementary video uploaded with this paper clearly demonstrates the performance of the autonomous CS detection algorithm.

The traversed trajectories (Fig. 8) and the convergence of the feature points (Fig. 10) in all of the performed experiments demonstrate the robust performance of the proposed data-driven learning and manipulation algorithm for both constrained and constraint-free PBIP tasks. Moreover, these experiments indicate the independence of the proposed framework from the initialization parameters, initial feature point definitions, and heterogeneity in the U-DT mechanical properties that make data-driven online deformation learning of a U-DT very challenging [5]. Investigation of the Euclidean distance trajectories (Fig. 8) and convergence plots (Fig. 10) of the AC-PBIP experiment with the desired point defined inside the CS demonstrates almost similar convergence times and behaviors for all of the constrained and constraint-free experiments. Of note, this result indirectly shows the computational efficiency of the used MobileNetV2 CNN algorithm in not increasing the computational time and not affecting the simultaneous CS detection, deformation learning, and manipulation of the U-DT phantom. Additionally, the similar traversed trajectories and YMM behavior (Fig. 10) in these experiments convey the correct definition of the constraints, as they should not become active when the feature point is located inside the SS. Similar YMM metrics also indicate that despite the presence of internal disturbances due to the heterogeneity of the U-DT mechanical properties, the proposed
framework can continuously adapt to and learn the new deformation behavior and position the feature point at its predefined location [22]. On the other hand, the results shown in Fig. 9 clearly represent the correct and reliable performance of the CS detection framework in real-time detection of the moving CS boundary, as well as that of the optimization framework in blocking the motion of the manipulator and terminating the procedure when the feature point collides with the CS border. The corresponding Euclidean distance convergence plots also show the capability of both the Endoscope Visual Framework and the Deformable Tissue Control Framework in safely manipulating a feature point located on the surface of a U-DT without a-priori knowledge of its deformation behavior.
6. Conclusion
In this work, toward ensuring safety in the autonomous surgical manipulation of a U-DT with an embedded critical tissue, we proposed an adaptive constrained optimization framework that simultaneously enables (i) real-time detection of the SS and CS in a U-DT using a CNN architecture, (ii) online and data-driven deformation learning of a U-DT using a first-rank update rule, and (iii) safe and indirect manipulation of a U-DT using the learnt deformation behavior and detected SS for performing a safe surgical intervention. Utilizing three metrics, we analyzed the efficacy of our optimization-based framework on a novel custom-designed U-DT phantom with an embedded CS designed to exhibit dynamic heterogeneity in mechanical properties and shape. The results clearly demonstrated the strong performance of the CNN-based detection algorithm in real-time identification of the CS, as well as that of the deformation learning and control algorithms in safely manipulating the U-DT phantom independent of the initialization parameters and the assigned desired target points.

Although promising, the proposed work is limited to using the 2D image plane to conduct a 3D manipulation of a U-DT. Future work will include extending the algorithm to 3D manipulation of different U-DT phantoms with 3D geometry and of ex-vivo animal tissues. Additionally, endoscope occlusion may limit the performance of our vision-based algorithm. To address this issue, we may consider implementing and testing other real-time imaging modalities (e.g. a near-infrared fluorescent imaging system) [19]. We may also improve the performance of our CNN algorithm by collecting additional training images under different experimental conditions and trying other efficient object detection algorithms. Moreover, we may implement a CNN-based algorithm to improve detection and tracking of the considered feature point. Also, as an extension of the proposed work, we will consider cases with multiple CSs within the U-DT and improve our detection and manipulation algorithms for these cases.
Acknowledgments

This work is supported by The University of Texas at Austin internal funds. We also thank Mr. Manuel Retana for his help in the development of the algorithms and the data collection procedure.
References

1. A. Attanasio, B. Scaglioni, E. De Momi, P. Fiorini and P. Valdastri, Autonomy in surgical robotics, Annu. Rev. Control Robot. Auton. Syst. 4 (2021) 651–679.
2. G.-Z. Yang et al., Medical robotics: Regulatory, ethical, and legal considerations for increasing levels of autonomy, Sci. Robot. 2(4) (2017) eaam8638.
3. F. Ficuciello, G. Tamburrini, A. Arezzo, L. Villani and B. Siciliano, Autonomy in surgical robots and its meaningful human control, Paladyn, J. Behav. Robot. 10(1) (2019) 30–43.
4. H. Yin, A. Varava and D. Kragic, Modeling, learning, perception, and control methods for deformable object manipulation, Sci. Robot. 6(54) (2021) eabd8803.
5. F. Alambeigi, Z. Wang, R. Hegeman, Y.-H. Liu and M. Armand, A robust data-driven approach for online learning and manipulation of unmodeled 3D heterogeneous compliant objects, IEEE Robot. Autom. Lett. 3(4) (2018) 4140–4147.
6. J. Zhu et al., Challenges and outlook in robotic manipulation of deformable objects, arXiv:2105.01767.
7. D. McConachie, T. Power, P. Mitrano and D. Berenson, Learning when to trust a dynamics model for planning in reduced state spaces, IEEE Robot. Autom. Lett. 5(2) (2020) 3540–3547.
8. F. Zhong, Y. Wang, Z. Wang and Y.-H. Liu, Dual-arm robotic needle insertion with active tissue deformation for autonomous suturing, IEEE Robot. Autom. Lett. 4(3) (2019) 2669–2676.
9. B. Lu, W. Chen, Y.-M. Jin, D. Zhang, Q. Dou, H. K. Chu, P.-A. Heng and Y.-H. Liu, A learning-driven framework with spatial optimization for surgical suture thread reconstruction and autonomous grasping under multiple topologies and environmental noises, in 2020 IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS) (IEEE, 2020), pp. 3075–3082.
10. S. Hirai and T. Wada, Indirect simultaneous positioning of deformable objects with multi-pinching fingers based on an uncertain model, Robotica 18(1) (2000) 3–11.
11. D. Berenson, Manipulation of deformable objects without modeling and simulating deformation, in 2013 IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IEEE, 2013), pp. 4525–4532.
12. J. Zhu, D. Navarro-Alarcon, R. Passama and A. Cherubini, Vision-based manipulation of deformable and rigid objects using subspace projections of 2D contours, Robot. Auton. Syst. 142 (2021) 103798.
13. D. Navarro-Alarcon, Y.-H. Liu, J. G. Romero and P. Li, On the visual deformation servoing of compliant objects: Uncalibrated control methods and experiments, Int. J. Robot. Res. 33(11) (2014) 1462–1480.
14. W. Zhang, K. Schmeckpeper, P. Chaudhari and K. Daniilidis, Deformable linear object prediction using locally linear latent dynamics, in 2021 IEEE Int. Conf. Robotics and Automation (ICRA) (IEEE, 2021), pp. 13503–13509.
15. C. Shin, P. W. Ferguson, S. A. Pedram, J. Ma, E. P. Dutson and J. Rosen, Autonomous tissue manipulation via surgical robot using learning-based model predictive control, in 2019 Int. Conf. Robotics and Automation (ICRA) (IEEE, 2019), pp. 3875–3881.
16. A. Pore, E. Tagliabue, M. Piccinelli, D. Dall'Alba, A. Casals and P. Fiorini, Learning from demonstrations for autonomous soft-tissue retraction, in 2021 Int. Symp. Medical Robotics (ISMR) (IEEE, 2021), pp. 1–7.
17. N. D. Nguyen, T. Nguyen, S. Nahavandi, A. Bhatti and G. Guest, Manipulating soft tissues by deep reinforcement learning for autonomous robotic surgery, in 2019 IEEE Int. Systems Conf. (SysCon) (IEEE, 2019), pp. 1–7.
18. F. Liu, E. Su, J. Lu, M. Li and M. C. Yip, Differentiable robotic manipulation of deformable rope-like objects using compliant position-based dynamics, arXiv:2202.09714.
19. A. Shademan, R. S. Decker, J. D. Opfermann, S. Leonard, A. Krieger and P. C. Kim, Supervised autonomous robotic soft tissue surgery, Sci. Transl. Med. 8(337) (2016) 337ra64.
20. F. Faure et al., SOFA: A multi-model framework for interactive physical simulation, in Soft Tissue Biomechanical Modeling for Computer Assisted Surgery (Springer, 2012), pp. 283–321.
21. Y. Adagolodjo, L. Goffin, M. De Mathelin and H. Courtecuisse, Robotic insertion of flexible needle in deformable structures using inverse finite-element simulation, IEEE Trans. Robot. 35(3) (2019) 697–708.
22. F. Alambeigi, Z. Wang, Y.-H. Liu, R. H. Taylor and M. Armand, Toward semi-autonomous cryoablation of kidney tumors via model-independent deformable tissue manipulation technique, Ann. Biomed. Eng. 46(10) (2018) 1650–1662.
23. F. Alambeigi, Z. Wang, R. Hegeman, Y.-H. Liu and M. Armand, Autonomous data-driven manipulation of unknown anisotropic deformable tissues using unmodelled continuum manipulators, IEEE Robot. Autom. Lett. 4(2) (2019) 254–261.
24. F. Alambeigi, Z. Wang, Y.-H. Liu, R. H. Taylor and M. Armand, A versatile data-driven framework for model-independent control of continuum manipulators interacting with obstructed environments with unknown geometry and stiffness, preprint (2020), arXiv:2005.01951.
25. M. Retana, K. Nalamwar, D. T. Conyers, S. F. Atashzar and F. Alambeigi, Autonomous data-driven manipulation of an unknown deformable tissue within constrained environments: A pilot study, in 2022 Int. Symp. Medical Robotics (ISMR) (IEEE, 2022), pp. 1–7.
26. B. Thananjeyan, A. Garg, S. Krishnan, C. Chen, L. Miller and K. Goldberg, Multilateral surgical pattern cutting in 2D orthotropic gauze with deep reinforcement learning policies for tensioning, in 2017 IEEE Int. Conf. Robotics and Automation (ICRA) (IEEE, 2017), pp. 2371–2378.
27. T. Nguyen, N. D. Nguyen, F. Bello and S. Nahavandi, A new tensioning method using deep reinforcement learning for surgical pattern cutting, in 2019 IEEE Int. Conf. Industrial Technology (ICIT) (IEEE, 2019), pp. 1339–1344.
28. S. A. Pedram, P. W. Ferguson, C. Shin, A. Mehta, E. P. Dutson, F. Alambeigi and J. Rosen, Toward synergic learning for autonomous manipulation of deformable tissues via surgical robots: An approximate Q-learning approach, in 2020 8th IEEE RAS/EMBS Int. Conf. Biomedical Robotics and Biomechatronics (BioRob) (IEEE, 2020), pp. 878–884.
29. J. Lu, A. Jayakumari, F. Richter, Y. Li and M. C. Yip, SuPer Deep: A surgical perception framework for robotic tissue manipulation using deep learning for feature extraction, in 2021 IEEE Int. Conf. Robotics and Automation (ICRA) (IEEE, 2021), pp. 4783–4789.
30. F. Liu, Z. Li, Y. Han, J. Lu, F. Richter and M. C. Yip, Real-to-sim registration of deformable soft tissue with position-based dynamics for surgical robot autonomy, in 2021 IEEE Int. Conf. Robotics and Automation (ICRA) (IEEE, 2021), pp. 12328–12334.
31. C. G. Broyden, A class of methods for solving nonlinear simultaneous equations, Math. Comput. 19(92) (1965) 577–593.
32. P. Kazanzides, Z. Chen, A. Deguet, G. S. Fischer, R. H. Taylor and S. P. DiMaio, An open-source research kit for the da Vinci surgical system, in Proc. IEEE Int. Conf. Robotics and Automation (Hong Kong, 2014), pp. 6434–6439.
33. M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler and A. Y. Ng, ROS: An open-source robot operating system, in ICRA Workshop on Open Source Software, Vol. 3 (Kobe, 2009).
34. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens and Z. Wojna, Rethinking the inception architecture for computer vision, in Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition (Las Vegas, Nevada, 2016), pp. 2818–2826.
35. K. He, X. Zhang, S. Ren and J. Sun, Deep residual learning for image recognition, in 2016 IEEE Conf. Computer Vision and Pattern Recognition (CVPR) (Las Vegas, Nevada, 2016), pp. 770–778.
36. M. Sandler, A. Howard, M. Zhu, A. Zhmoginov and L.-C. Chen, MobileNetV2: Inverted residuals and linear bottlenecks, in Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition (Salt Lake City, Utah, 2018), pp. 4510–4520.
37. J. Shi and C. Tomasi, Good features to track, in 1994 Proc. IEEE Conf. Computer Vision and Pattern Recognition (IEEE, 1994), pp. 593–600.
38. S. Baker and I. Matthews, Lucas–Kanade 20 years on: A unifying framework, Int. J. Comput. Vis. 56(3) (2004) 221–255.
39. H. Zhang, S. Payandeh and J. Dill, On cutting and dissection of virtual deformable objects, in 2004 IEEE Int. Conf. Robotics and Automation, Vol. 4 (IEEE, 2004), pp. 3908–3913.
40. T. Yoshikawa, Manipulability of robotic mechanisms, Int. J. Robot. Res. 4(2) (1985) 3–9.

Braden P. Murphy is an undergraduate student pursuing a B.Sc. degree in mechanical engineering at the University of Texas at Austin, Austin, TX, USA. He has been working under the supervision of Dr. Alambeigi in the Advanced Robotic Technologies for Surgery (ARTS) lab as an Undergraduate Research Assistant since 2022. His current research interests include surgical autonomy and image-guided surgery with visual constraints.

Farshid Alambeigi (Member, IEEE) received the B.Sc. and M.Sc. degrees in mechanical engineering from the K.N. Toosi University of Technology and the Sharif University of Technology, Tehran, Iran, in 2009 and 2012, respectively. He also received his M.S.E. degree in robotics and Ph.D. degree in mechanical engineering from Johns Hopkins University, Baltimore, MD, USA, in 2017 and 2019, respectively. He is currently an Assistant Professor with the Walker Department of Mechanical Engineering and Texas Robotics at the University of Texas at Austin, TX, USA. His research interests include the development, sensing, and control of highly-dexterous continuum manipulators, soft robots, and flexible instruments designed for the treatment and diagnosis of various medical applications. In his Advanced Robotic Technologies for Surgery (ARTS) lab, he is working to augment clinicians' skills with robotic technologies toward surgineering and improving surgical outcomes.