Query:
Scholar name: Deng Zhen (邓震)
Abstract :
Flexible endoscopic robots have a continuum structure that offers unique advantages in minimally invasive surgery, but the nonlinear deformation of the continuum structure limits their motion-control accuracy. To address this problem, this paper proposes an optimal teleoperation control method for flexible endoscopic robots based on neurodynamics. First, a master-slave motion-mapping mechanism for the endoscopic robot is constructed in image space and a kinematic model of the flexible endoscope is established, yielding the mapping between image-feature velocities and actuation velocities. Second, based on joint-motion constraints, the robot motion-control problem is reformulated as a quadratic-programming (QP) optimal control problem, which is solved efficiently by a real-time neurodynamics-based solver. Finally, experiments are conducted on a robotic ureteroscope platform. The results show that the proposed method effectively reduces manual operation errors and velocity oscillations, keeps the target-point tracking error within 2.5%, and improves the accuracy and stability of instrument manipulation during lithotripsy.
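The QP-plus-neurodynamics step described above can be pictured with a small sketch: a generic projection-neural-network solver for a box-constrained QP, applied to a hypothetical image-Jacobian tracking problem. The matrix `J`, the velocity limits, and all gains below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def box_project(x, lb, ub):
    """Project onto the box [lb, ub] elementwise."""
    return np.minimum(np.maximum(x, lb), ub)

def neurodynamic_qp(Q, c, lb, ub, eps=1e-2, dt=1e-2, alpha=0.1, steps=2000):
    """Projection neural network for  min 0.5 x'Qx + c'x  s.t. lb <= x <= ub:
        eps * dx/dt = P_box(x - alpha*(Q x + c)) - x,
    integrated with forward Euler (with dt = eps this reduces to a
    projected-gradient fixed-point iteration)."""
    x = np.zeros(Q.shape[0])
    for _ in range(steps):
        dx = box_project(x - alpha * (Q @ x + c), lb, ub) - x
        x = x + (dt / eps) * dx
        if np.linalg.norm(dx) < 1e-9:
            break
    return x

# Hypothetical teleoperation step: track a commanded image-feature velocity
# v_img with actuator velocities qdot bounded by joint limits.
J = np.array([[0.8, 0.1, 0.0],
              [0.0, 0.6, 0.2]])        # assumed image Jacobian (2 features, 3 actuators)
v_img = np.array([0.05, -0.02])        # feature velocity commanded by the master device
lam = 1e-3                             # damping keeps the QP strictly convex
Q = J.T @ J + lam * np.eye(3)
c = -J.T @ v_img
qdot_max = np.array([0.1, 0.1, 0.1])   # assumed joint-velocity limits
qdot = neurodynamic_qp(Q, c, -qdot_max, qdot_max)
print("actuator velocity command:", qdot)
```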
Keyword :
Master-slave teleoperation; Optimal control; Flexible endoscopic robot; Neurodynamic optimization
Cite:
GB/T 7714: 张杰阳, 何帅, 邓震, et al. 基于神经动力学优化的柔性内窥镜机器人最优遥操作控制 [J]. 集成技术, 2025, 14(2): 3-12.
MLA: 张杰阳, et al. "基于神经动力学优化的柔性内窥镜机器人最优遥操作控制." 集成技术 14.2 (2025): 3-12.
APA: 张杰阳, 何帅, 邓震, 何炳蔚. 基于神经动力学优化的柔性内窥镜机器人最优遥操作控制. 集成技术, 2025, 14(2), 3-12.
Abstract :
In this letter, a constrained visual predictive control strategy (C-VPC) is developed for a robotic flexible endoscope to precisely track target features in narrow environments while adhering to visibility and joint limit constraints. The visibility constraint, crucial for keeping the target feature within the camera's field of view, is explicitly designed using zeroing control barrier functions to uphold the forward invariance of a visible set. To automate the robotic endoscope, kinematic modeling for image-based visual servoing is conducted, resulting in a state-space model that facilitates the prediction of the future evolution of the endoscopic state. The C-VPC method calculates the optimal control input by optimizing the model-based predictions of the future state under visibility and joint limit constraints. Both simulation and experimental results demonstrate the effectiveness of the proposed method in achieving autonomous target tracking and addressing the visibility constraint simultaneously. The proposed method achieved a reduction of 12.3% in Mean Absolute Error (MAE) and 56.0% in variance (VA) compared to classic IBVS.
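As a rough illustration of how a zeroing control barrier function keeps a feature inside a visible set, the sketch below filters a nominal visual-servoing command through the condition dh/dt >= -gamma*h posed as a one-step constrained optimization. The interaction matrix, the circular visible set h(s) = r^2 - ||s||^2, and all numbers are assumptions for illustration; the paper instead embeds the constraint in a multi-step predictive controller.

```python
import numpy as np
from scipy.optimize import minimize

L = np.array([[1.0, 0.0, 0.3],
              [0.0, 1.0, -0.2]])   # assumed interaction (image Jacobian) matrix
s = np.array([0.45, 0.30])         # current feature position (normalized image coords)
r = 0.6                            # assumed half-width of the visible region
gamma = 2.0                        # CBF decay rate
u_nom = np.array([0.2, 0.15, 0.0]) # nominal IBVS-style control input
u_max = 0.3                        # actuator limit

def h(s):
    # Barrier: positive while the feature stays inside the visible set.
    return r**2 - s @ s

def cbf_constraint(u):
    # dh/dt = grad_h . (L u) = -2 s' L u ; require dh/dt + gamma*h >= 0.
    return -2.0 * s @ (L @ u) + gamma * h(s)

# Stay as close as possible to the nominal command while respecting the
# visibility constraint and the actuator bounds.
res = minimize(lambda u: np.sum((u - u_nom)**2), u_nom, method="SLSQP",
               constraints=[{"type": "ineq", "fun": cbf_constraint}],
               bounds=[(-u_max, u_max)] * 3)
print("visibility-safe control:", res.x)
```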
Keyword :
Bending; Cameras; Endoscopes; Flexible robotics; image-based visual servoing; Jacobian matrices; Predictive control; robotic flexible endoscope; Robot kinematics; Robots; Target tracking; Visualization; visual predictive control; Visual servoing
Cite:
GB/T 7714: Deng, Zhen, Liu, Weiwei, Li, Guotao, et al. Constrained Visual Predictive Control of a Robotic Flexible Endoscope With Visibility and Joint Limits Constraints [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10(2): 1513-1520.
MLA: Deng, Zhen, et al. "Constrained Visual Predictive Control of a Robotic Flexible Endoscope With Visibility and Joint Limits Constraints." IEEE ROBOTICS AND AUTOMATION LETTERS 10.2 (2025): 1513-1520.
APA: Deng, Zhen, Liu, Weiwei, Li, Guotao, Zhang, Jianwei. Constrained Visual Predictive Control of a Robotic Flexible Endoscope With Visibility and Joint Limits Constraints. IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10(2), 1513-1520.
Abstract :
Autonomous robotic massage holds the potential to alleviate the workload of nurses and improve the quality of healthcare. However, the complexity of the task and the dynamic of the environment present significant challenges for robotic massage. This paper presents a vision-based robotic massage (VBRM) framework that facilitates autonomous robot massaging of the human body while ensuring safe operation in a dynamic environment. The VBRM framework allows the operator to define the massage trajectory by drawing a 2D curve on an RGB image. An interactive trajectory planning method is developed to calculate a 3D massage trajectory from the 2D trajectory. This method accounts for potential movements of the human body and updates the planned trajectory using rigid point cloud registration. Additionally, a hybrid motion/force controller is employed to regulate the motion of the robot's end-effector, considering the possibility of excessive contact force. The proposed framework enables the operator to adjust the massage trajectory and speed according to their requirements. Real-world experiments are conducted to evaluate the efficacy of the proposed approach. The results demonstrate that the framework enables successful planning and execution of the massage task in a dynamic environment. Furthermore, the operator has the flexibility to set the massage trajectory, speed, and contact force arbitrarily, thereby enhancing human-machine interaction.
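A minimal sketch of the registration-based trajectory update mentioned above, assuming the body motion is rigid: a Kabsch/SVD alignment between the point cloud captured at planning time and the current one, whose estimated transform is then applied to the planned waypoints. All arrays here are synthetic placeholders, not data from the paper.

```python
import numpy as np

def rigid_registration(P, Q):
    """Estimate rotation R and translation t mapping point set P onto Q
    (Kabsch/SVD), used here as a generic stand-in for the paper's
    rigid point-cloud registration step."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Hypothetical example: the body moves, so the planned 3D massage
# trajectory is re-expressed in the new body pose.
body_before = np.random.rand(200, 3)                 # point cloud at planning time
R_true = np.array([[0.995, -0.0998, 0.0],
                   [0.0998, 0.995, 0.0],
                   [0.0,     0.0,   1.0]])
t_true = np.array([0.02, -0.01, 0.005])
body_after = body_before @ R_true.T + t_true         # point cloud after body motion

R, t = rigid_registration(body_before, body_after)
trajectory = np.random.rand(50, 3)                   # planned 3D massage waypoints
trajectory_updated = trajectory @ R.T + t            # waypoints follow the body motion
print("rotation estimation error:", np.linalg.norm(R - R_true))
```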
Keyword :
Interactive trajectory planning; Physical robot-environment interaction; Robot massage; Visual servoing
Cite:
GB/T 7714: Xu, Qinling, Deng, Zhen, Zeng, Chao, et al. Toward automatic robotic massage based on interactive trajectory planning and control [J]. COMPLEX & INTELLIGENT SYSTEMS, 2024, 10(3): 4397-4407.
MLA: Xu, Qinling, et al. "Toward automatic robotic massage based on interactive trajectory planning and control." COMPLEX & INTELLIGENT SYSTEMS 10.3 (2024): 4397-4407.
APA: Xu, Qinling, Deng, Zhen, Zeng, Chao, Li, Zhuoran, He, Bingwei, Zhang, Jianwei. Toward automatic robotic massage based on interactive trajectory planning and control. COMPLEX & INTELLIGENT SYSTEMS, 2024, 10(3), 4397-4407.
Abstract :
The ability to automatically segment anatomical targets in medical images is crucial for clinical diagnosis and interventional therapy. However, supervised learning methods often require a large number of pixel-wise labels that are difficult to obtain. This paper proposes a weakly supervised glottis segmentation (WSGS) method that trains end-to-end neural networks using only point annotations as labels. The method works by iteratively generating pseudo-labels and training the segmentation network. An automatic seeded region growing (ASRG) algorithm is introduced to generate high-quality pseudo-labels by expanding the point annotations based on the network's predictions and image features. Additionally, a novel loss function based on the structural similarity index measure (SSIM) is designed to enhance boundary segmentation. Using the trained network as its core, a glottis-state monitor is developed to detect the motion of the glottis and assist the anesthesiologist. Finally, the performance of the proposed approach was evaluated on two datasets, achieving an average mIoU of 82.7% and an accuracy of 91.3%. The proposed monitor proved effective, which is significant for clinical applications.
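A toy version of the seeded-region-growing idea, assuming only intensity similarity (the paper's ASRG additionally uses the network's predictions): grow a pseudo-label mask outward from the point annotations while the intensity stays close to the running region mean. The image, seed, and tolerance below are made up for illustration.

```python
import numpy as np
from collections import deque

def seeded_region_growing(image, seeds, tol=0.08):
    """4-connected BFS from point annotations: a neighbour is accepted if
    its intensity stays within `tol` of the running region mean."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque(seeds)
    region_sum, region_n = 0.0, 0
    for r, c in seeds:
        mask[r, c] = True
        region_sum += image[r, c]
        region_n += 1
    while queue:
        r, c = queue.popleft()
        mean = region_sum / region_n
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                if abs(image[rr, cc] - mean) < tol:
                    mask[rr, cc] = True
                    region_sum += image[rr, cc]
                    region_n += 1
                    queue.append((rr, cc))
    return mask

# Toy frame: a dark glottis-like blob on a brighter background.
img = np.ones((64, 64)) * 0.8
img[20:40, 25:45] = 0.2                       # synthetic dark region
pseudo_label = seeded_region_growing(img, seeds=[(30, 35)])
print("pseudo-label pixels:", pseudo_label.sum())
```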
Keyword :
Glottis segmentation; Medical image segmentation; Weakly supervised learning
Cite:
GB/T 7714: Wei, Xiaoxiao, Deng, Zhen, Zheng, Xiaochun, et al. Weakly supervised glottis segmentation on endoscopic images with point supervision [J]. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2024, 92.
MLA: Wei, Xiaoxiao, et al. "Weakly supervised glottis segmentation on endoscopic images with point supervision." BIOMEDICAL SIGNAL PROCESSING AND CONTROL 92 (2024).
APA: Wei, Xiaoxiao, Deng, Zhen, Zheng, Xiaochun, He, Bingwei, Hu, Ying. Weakly supervised glottis segmentation on endoscopic images with point supervision. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2024, 92.
Abstract :
Tendon-driven continuum robots (TDCRs) have infinite degrees of freedom and high flexibility, posing challenges for accurate modeling and autonomous control, especially in confined environments. This paper presents a model-less optimal visual control (MLOVC) method using neurodynamic optimization to enable autonomous target tracking of TDCRs in confined environments. The TDCR's kinematics are estimated online from sensory data, establishing a connection between the actuator input and visual features. An optimal visual servoing method based on quadratic programming (QP) is developed to ensure precise target tracking without violating the robot's physical constraints. An inverse-free recurrent neural network (RNN)-based neurodynamic optimization method is designed to solve the complex QP problem. Comparative simulations and experiments demonstrate that the proposed method outperforms existing methods in target tracking accuracy and computational efficiency. The RNN-based controller successfully achieves target tracking within constraints in confined environments.
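For the model-less part, one common way to estimate the actuation-to-feature Jacobian online is a rank-one Broyden update; the sketch below uses that as a generic stand-in (the paper's own estimator and its RNN-based neurodynamic QP solver are not reproduced here), with a made-up "true" mapping used only to close the simulation loop.

```python
import numpy as np

def broyden_update(J, dq, ds, damping=1.0):
    """Rank-one update so the estimated Jacobian is consistent with the
    observed feature change ds produced by the actuation change dq."""
    denom = dq @ dq
    if denom < 1e-12:
        return J
    return J + damping * np.outer(ds - J @ dq, dq) / denom

# Toy loop: the "true" mapping is unknown to the controller.
J_true = np.array([[0.9, 0.2, -0.1],
                   [0.1, 0.7,  0.3]])
J_hat = np.eye(2, 3)                       # rough initial guess
s = np.array([0.3, -0.2])                  # current feature error
gain, dq_max = 0.5, 0.05
for _ in range(200):
    # Damped pseudo-inverse step toward s -> 0 under an actuation limit.
    dq = -gain * np.linalg.pinv(J_hat) @ s
    dq = np.clip(dq, -dq_max, dq_max)
    ds = J_true @ dq                       # observed feature change
    J_hat = broyden_update(J_hat, dq, ds)  # refine the online estimate
    s = s + ds
print("remaining feature error:", np.linalg.norm(s))
```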
Keyword :
Neurodynamic optimization; Optimal visual control; Robotic ureteroscopy; Tendon-driven continuum robots; Visual servoing
Cite:
GB/T 7714: He, Shuai, Zou, Chaorong, Deng, Zhen, et al. Model-less optimal visual control of tendon-driven continuum robots using recurrent neural network-based neurodynamic optimization [J]. ROBOTICS AND AUTONOMOUS SYSTEMS, 2024, 182.
MLA: He, Shuai, et al. "Model-less optimal visual control of tendon-driven continuum robots using recurrent neural network-based neurodynamic optimization." ROBOTICS AND AUTONOMOUS SYSTEMS 182 (2024).
APA: He, Shuai, Zou, Chaorong, Deng, Zhen, Liu, Weiwei, He, Bingwei, Zhang, Jianwei. Model-less optimal visual control of tendon-driven continuum robots using recurrent neural network-based neurodynamic optimization. ROBOTICS AND AUTONOMOUS SYSTEMS, 2024, 182.
Abstract :
Accurate control of continuum robots in confined environments presents a significant challenge due to the need for a precise kinematic model, which is susceptible to external interference. This paper introduces a model-less optimal visual control (MLOVC) method that enables a tendon-sheath-driven continuum robot (TSDCR) to effectively track visual targets in a confined environment while ensuring stability. The method allows for intraluminal navigation of TSDCRs along narrow lumens. To account for external outliers, a robust Jacobian estimation method is proposed, in which improved iterative reweighted least squares with sliding windows is used to compute the robot's Jacobian matrix online from sensing data. The estimated Jacobian establishes the motion relationship between the visual features and the actuation. Furthermore, an optimal visual control method based on quadratic programming (QP) is designed for visual target tracking while respecting the robot's physical and control constraints. The MLOVC method provides a reliable alternative for visual tracking that does not rely on precise kinematics of TSDCRs and accounts for the impact of outliers. The control stability of the proposed approach is demonstrated through Lyapunov analysis. Simulations and experiments are conducted to evaluate the effectiveness of the MLOVC method, and the results demonstrate that it enhances tracking performance in terms of accuracy and stability.
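The sliding-window IRLS idea can be sketched as follows, assuming Huber-style weights and synthetic (dq, ds) samples with injected outliers; the paper's exact weighting scheme and window handling may differ.

```python
import numpy as np

def huber_weights(residual_norms, delta=0.02):
    """Huber weighting: weight 1 inside the band, down-weighted outside."""
    w = np.ones_like(residual_norms)
    mask = residual_norms > delta
    w[mask] = delta / residual_norms[mask]
    return w

def irls_jacobian(dq_win, ds_win, iters=10, delta=0.02):
    """Estimate J minimizing sum_i w_i * ||ds_i - J dq_i||^2 over a sliding
    window of (dq, ds) pairs, with weights recomputed from the residuals
    (iteratively reweighted least squares)."""
    m, n = ds_win.shape[1], dq_win.shape[1]
    J = np.zeros((m, n))
    for _ in range(iters):
        res = ds_win - dq_win @ J.T
        w = huber_weights(np.linalg.norm(res, axis=1), delta)
        W = np.sqrt(w)[:, None]
        # Weighted least squares: solve (W*dq) J^T = (W*ds) in the LS sense.
        J = np.linalg.lstsq(W * dq_win, W * ds_win, rcond=None)[0].T
    return J

# Toy window: 20 samples from a true Jacobian, with two outliers injected.
rng = np.random.default_rng(0)
J_true = np.array([[0.8, 0.1], [0.2, 0.6]])
dq_win = rng.normal(scale=0.02, size=(20, 2))
ds_win = dq_win @ J_true.T + rng.normal(scale=1e-4, size=(20, 2))
ds_win[3] += 0.05        # outlier, e.g. a visual-tracking glitch
ds_win[11] -= 0.04
J_est = irls_jacobian(dq_win, ds_win)
print("estimation error:", np.linalg.norm(J_est - J_true))
```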
Keyword :
Continuum robot; Optimal visual control; Robust Jacobian estimate; Stability analysis
Cite:
GB/T 7714: Pan, Chuanchuan, Deng, Zhen, Zeng, Chao, et al. Optimal visual control of tendon-sheath-driven continuum robots with robust Jacobian estimation in confined environments [J]. MECHATRONICS, 2024, 104.
MLA: Pan, Chuanchuan, et al. "Optimal visual control of tendon-sheath-driven continuum robots with robust Jacobian estimation in confined environments." MECHATRONICS 104 (2024).
APA: Pan, Chuanchuan, Deng, Zhen, Zeng, Chao, He, Bingwei, Zhang, Jianwei. Optimal visual control of tendon-sheath-driven continuum robots with robust Jacobian estimation in confined environments. MECHATRONICS, 2024, 104.
Abstract :
Purpose: The re-measurement of full-arch implant digital impressions is an important step in denture restoration. This paper presents an efficient intraoral photogrammetry technique based on a projective-invariant marker for the re-measurement of full-arch implant digital impressions. Methods: We developed a self-identifying marker with projective invariance, together with a corresponding detection algorithm. The marker is mounted on the scan body and used for photogrammetric measurement. Triangulation is used to determine the 3D coordinates of the marker, followed by a series of post-processing steps that refine the 3D coordinates. Results: The experimental data indicate that the optimal working distance is between 200 and 250 mm, with a minimum measurement error below 0.05 mm, an average measurement error of 0.10 mm, and a measurement time of less than 2 min. Conclusions: The experimental results show that the photogrammetric system can obtain reliable full-arch implant positions efficiently, without entering the patient's oral cavity, and has potential clinical value.
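The triangulation step can be illustrated with standard linear (DLT) two-view triangulation; the camera matrices and the 3D point below are made-up values chosen only to mimic the reported 200-250 mm working distance.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker point from two views.
    P1, P2 are 3x4 camera projection matrices; x1, x2 are pixel coords."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # back to inhomogeneous 3D coordinates

def project(P, X):
    """Project a 3D point with projection matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical stereo setup: reference camera and a second camera shifted 60 mm.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-60.0], [0.0], [0.0]])])   # baseline in mm
X_true = np.array([10.0, -5.0, 230.0])                               # marker point on the scan body
X_est = triangulate_point(P1, P2, project(P1, X_true), project(P2, X_true))
print("triangulation error (mm):", np.linalg.norm(X_est - X_true))
```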
Keyword :
Fiducial marker; Implant; Photogrammetry; Projective invariant
Cite:
GB/T 7714: Chen, Yanghai, Zhu, Mingzhu, He, Bingwei, et al. Efficient intraoral photogrammetry using self-identifying projective invariant marker [J]. INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY, 2023, 19(4): 767-778.
MLA: Chen, Yanghai, et al. "Efficient intraoral photogrammetry using self-identifying projective invariant marker." INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY 19.4 (2023): 767-778.
APA: Chen, Yanghai, Zhu, Mingzhu, He, Bingwei, Deng, Zhen. Efficient intraoral photogrammetry using self-identifying projective invariant marker. INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY, 2023, 19(4), 767-778.
Abstract :
Nasotracheal intubation (NTI) is one of the most commonly performed procedures in anesthesia and is considered the gold standard for securing the airway of patients. Endoscope operation is critical to the success of NTI. However, this operation remains challenging as it requires the surgeon to classify anatomical landmarks and detect heading targets of the endoscope tip in a sequence of monocular images. To alleviate this problem, this study presents a learning-based navigation method that automatically classifies four different anatomical landmarks and detects the heading target of the endoscope tip from endoscopic images. First, an end-to-end multitask network is introduced that consists of a branch for anatomical landmark classification and another for heading target detection. In addition, a convolutional attention module is designed to improve network performance by combining spatial and channel attention. Second, an endoscopic dataset named intuNav is built for network training. The trained network calculates navigation information without image prior knowledge. Finally, extensive experiments on the built dataset and endoscopic videos demonstrate the high performance of our method, achieving classification (94%) and detection (79.4%) accuracies. The results also indicate that the proposed method is effective and efficient in navigation generation during NTI.
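The "convolutional attention module combining spatial and channel attention" is in the spirit of CBAM-style blocks; a sketch of such a module is given below in PyTorch. Layer sizes and the reduction ratio are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class ConvAttention(nn.Module):
    """Channel attention followed by spatial attention (CBAM-style)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite per-channel weights.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: a 7x7 conv over pooled channel statistics.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_mlp(x)
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        x = x * self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        return x

# Usage inside a shared backbone feeding two task heads
# (landmark classification and heading-target detection).
feat = torch.randn(1, 64, 32, 32)
refined = ConvAttention(64)(feat)
print(refined.shape)   # torch.Size([1, 64, 32, 32])
```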
Keyword :
Deep learning; Endoscopic navigation; Image classification and detection; Nasotracheal intubation
Cite:
GB/T 7714: Deng, Zhen, Wei, Xiaoxiao, Zheng, Xiaochun, et al. Automatic endoscopic navigation based on attention-based network for Nasotracheal Intubation [J]. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 86.
MLA: Deng, Zhen, et al. "Automatic endoscopic navigation based on attention-based network for Nasotracheal Intubation." BIOMEDICAL SIGNAL PROCESSING AND CONTROL 86 (2023).
APA: Deng, Zhen, Wei, Xiaoxiao, Zheng, Xiaochun, He, Bingwei. Automatic endoscopic navigation based on attention-based network for Nasotracheal Intubation. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 86.
Abstract :
Endoscopic operation is one of the most difficult parts of nasotracheal intubation (NTI), a procedure used to secure the airway of a patient. Because it requires eye-hand coordination, manual operation of a flexible endoscope is challenging, even for an experienced surgeon. To enhance intubation efficiency, this paper presents a master-slave robotic nasotracheal intubation system (RNIS) for endoscope operation. The RNIS controls the endoscope in three degrees of freedom. An assisted teleoperation control strategy is designed to help the operator remotely control the pose, i.e., position and orientation, of the endoscope tip via a joystick. To ensure intubation efficiency, visual feedback assistance is proposed to fine-tune the orientation of the endoscope tip as needed. The proposed system and methods are experimentally validated on a motion simulator and a phantom. The results demonstrate that the master-slave RNIS can successfully insert the flexible endoscope into the trachea of a phantom through the nasal cavity.
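One way to picture the assisted teleoperation idea is a joystick-to-tip velocity mapping with a small visual correction term that fine-tunes orientation toward the detected target. The axis conventions, gains, and units below are purely illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def assisted_teleop_command(joystick, target_px, image_center,
                            k_assist=0.002, v_max=(5.0, 0.5, 0.5)):
    """Blend the operator's joystick command with a visual correction that
    nudges the endoscope tip toward the detected target in the image.

    joystick     : (insert, bend, rotate), each in [-1, 1]
    target_px    : detected target pixel (u, v)
    image_center : principal point (u0, v0)
    returns      : (insertion mm/s, bending rad/s, rotation rad/s)
    """
    insert, bend, rotate = joystick
    err = np.asarray(target_px, float) - np.asarray(image_center, float)
    # The operator drives the motion; the visual term only fine-tunes orientation.
    cmd = np.array([
        insert * v_max[0],
        bend   * v_max[1] - k_assist * err[1],   # vertical image error -> bending
        rotate * v_max[2] - k_assist * err[0],   # horizontal image error -> rotation
    ])
    return np.clip(cmd, [-v for v in v_max], list(v_max))

print(assisted_teleop_command((0.5, 0.1, 0.0), target_px=(400, 300), image_center=(320, 240)))
```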
Keyword :
Nasotracheal intubation; Robotic endoscope; Surgical robotics; Teleoperation control; Visual feedback assistance
Cite:
GB/T 7714: Deng, Zhen, Zhang, Shengzhan, Guo, Yuxin, et al. Assisted teleoperation control of robotic endoscope with visual feedback for nasotracheal intubation [J]. ROBOTICS AND AUTONOMOUS SYSTEMS, 2023, 172.
MLA: Deng, Zhen, et al. "Assisted teleoperation control of robotic endoscope with visual feedback for nasotracheal intubation." ROBOTICS AND AUTONOMOUS SYSTEMS 172 (2023).
APA: Deng, Zhen, Zhang, Shengzhan, Guo, Yuxin, Jiang, Hongqi, Zheng, Xiaochun, He, Bingwei. Assisted teleoperation control of robotic endoscope with visual feedback for nasotracheal intubation. ROBOTICS AND AUTONOMOUS SYSTEMS, 2023, 172.
Abstract :
Automation in robotic endoscopic surgery is an important but challenging research topic. The flexible deformation of the endoscope makes it difficult for surgeons to steer, especially in a narrow workspace, and the safety of endoscope steering is a major concern in robotic endoscopy. This study presents a safety-aware control framework that enables a robotic nasotracheal intubation system (RNIS) to automatically steer the flexible endoscope during nasotracheal intubation (NTI). With the endoscopic image as feedback, the fusion of image intensity and optical-flow measurements is used to detect the lumen center. A velocity-based orientation controller adjusts the orientation of the endoscope tip to track the lumen center. To ensure surgical safety, the relative depth from the endoscope tip to the lumen surface is calculated from the endoscopic images. With this relative depth as feedback, the feed motion of the endoscope is controlled to prevent collisions between the endoscope tip and the surrounding tissue. In this way, the RNIS adjusts both the position and the orientation of the endoscope tip during NTI. Finally, extensive experiments on a training manikin were performed. The results indicate that the proposed control framework enables the RNIS to steer the flexible endoscope safely along the upper respiratory tract, and lumen-center detection performed well when compared against annotations from experienced users.
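A much-simplified sketch of the lumen-center cue, assuming intensity alone (the paper fuses intensity with optical-flow measurements, which is omitted here): take the centroid of the darkest pixels as the lumen center to be tracked by the orientation controller. The synthetic frame below is a placeholder.

```python
import numpy as np

def lumen_center_from_intensity(gray, dark_ratio=0.2):
    """Centroid of the darkest pixels: threshold at min + dark_ratio*(max-min),
    exploiting the fact that the lumen usually appears as the darkest region."""
    thresh = gray.min() + dark_ratio * (gray.max() - gray.min())
    ys, xs = np.nonzero(gray <= thresh)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Toy frame: a dark circular lumen offset from the image center.
h, w = 240, 320
yy, xx = np.mgrid[0:h, 0:w]
frame = 0.7 * np.ones((h, w))
frame[(yy - 140)**2 + (xx - 200)**2 < 30**2] = 0.1   # synthetic lumen
print("estimated lumen center (x, y):", lumen_center_from_intensity(frame))
```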
Keyword :
Depth estimation; Endoscopic image processing; Nasotracheal intubation; Robotic endoscopy; Visual feedback control
Cite:
GB/T 7714: Deng, Zhen, Jiang, Peijie, Guo, Yuxin, et al. Safety-aware robotic steering of a flexible endoscope for nasotracheal intubation [J]. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 82.
MLA: Deng, Zhen, et al. "Safety-aware robotic steering of a flexible endoscope for nasotracheal intubation." BIOMEDICAL SIGNAL PROCESSING AND CONTROL 82 (2023).
APA: Deng, Zhen, Jiang, Peijie, Guo, Yuxin, Zhang, Shengzhan, Hu, Ying, Zheng, Xiaochun, et al. Safety-aware robotic steering of a flexible endoscope for nasotracheal intubation. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 82.