Query:
Scholar name: Deng Zhen
Abstract :
Contact-rich manipulation tasks are difficult to program for robots. Traditional compliance control methods, such as impedance control, rely heavily on environmental models and are ineffective for increasingly complex contact tasks. Reinforcement learning (RL) has achieved great success in games and robotics, and autonomous learning of manipulation skills can give robots autonomous decision-making capabilities. To this end, this work introduces a novel learning framework that combines deep RL (DRL) and variable impedance control (VIC) to achieve robotic massage tasks. A skill policy is learned in joint space that outputs the desired impedance gain and angle for each joint. To address the limitation of sparse rewards in DRL, an intrinsic curiosity module (ICM) was designed, which generates an intrinsic reward that encourages the robot to explore more effectively. Simulations and real-world experiments were performed to verify the effectiveness of the proposed method. The experiments demonstrate that contact-rich massage skills can be learned through the joint-space VIC-DRL framework in a simulation environment and that the ICM improves learning efficiency and overall task performance. Moreover, the generated policies still perform effectively on a real-world robot.
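A minimal PyTorch sketch of the intrinsic-curiosity bonus described in this abstract; the layer sizes, feature dimension, and scaling factor eta are illustrative assumptions rather than the paper's actual settings, and the inverse-model branch of a full ICM is omitted:

import torch
import torch.nn as nn

class ICM(nn.Module):
    """Minimal intrinsic curiosity module: the forward-model prediction
    error in a learned feature space serves as an intrinsic reward."""
    def __init__(self, state_dim, action_dim, feat_dim=64, eta=0.1):
        super().__init__()
        self.eta = eta
        self.encoder = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                     nn.Linear(128, feat_dim))
        self.forward_model = nn.Sequential(nn.Linear(feat_dim + action_dim, 128),
                                           nn.ReLU(), nn.Linear(128, feat_dim))

    def intrinsic_reward(self, state, action, next_state):
        phi_next = self.encoder(next_state)
        phi_pred = self.forward_model(torch.cat([self.encoder(state), action], dim=-1))
        # Larger prediction error -> more "novel" transition -> larger bonus.
        return 0.5 * self.eta * (phi_pred - phi_next).pow(2).sum(dim=-1)

The bonus is added to the sparse task reward during policy training, so transitions that the forward model predicts poorly are visited more often.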
Keyword :
Aerospace electronics; Decision making; Deep learning; Games; Impedance; Reinforcement learning; Task analysis
Cite:
GB/T 7714: Li, Zhuoran, Zeng, Chao, Deng, Zhen, et al. Learning Variable Impedance Control for Robotic Massage With Deep Reinforcement Learning: A Novel Learning Framework [J]. IEEE SYSTEMS MAN AND CYBERNETICS MAGAZINE, 2024, 10(1): 17-27.
MLA: Li, Zhuoran, et al. "Learning Variable Impedance Control for Robotic Massage With Deep Reinforcement Learning: A Novel Learning Framework." IEEE SYSTEMS MAN AND CYBERNETICS MAGAZINE 10.1 (2024): 17-27.
APA: Li, Zhuoran, Zeng, Chao, Deng, Zhen, Xu, Qinling, He, Bingwei, & Zhang, Jianwei. Learning Variable Impedance Control for Robotic Massage With Deep Reinforcement Learning: A Novel Learning Framework. IEEE SYSTEMS MAN AND CYBERNETICS MAGAZINE, 2024, 10(1), 17-27.
Abstract :
Tendon-driven continuum robots (TDCRs) with mechanical compliance have gained popularity in natural orifice transluminal endoscopic surgery (NOTES). Teleoperation of TDCRs involves performance objectives in addition to the visibility constraint, and handling the coupling between potentially conflicting objectives and the visibility constraint remains challenging for surgeons operating TDCRs. This paper presents a shared control method to assist in the teleoperation of TDCRs, which guarantees that visual targets remain within the field of view (FoV) of the TDCR. The visibility constraint is explicitly defined using a zeroing control barrier function, specified in terms of the forward invariance of a visible set. To ensure accuracy, the Jacobian matrix of the system is approximated online using sensing data. Then, the visibility constraint, along with the robot's physical constraints, is integrated into a quadratic program (QP) framework. This allows the operator's control input to be optimized subject to constraints, thus preserving visibility. Finally, simulations and experiments were conducted to demonstrate the effectiveness of the proposed approach under two teleoperation modes. The results show that the proposed method achieved a reduction of approximately 70% in ITP and 43% in MAE compared to direct teleoperation.
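A rough sketch of how a visibility constraint expressed as a zeroing control barrier function can be folded into a QP that minimally modifies the operator's command, using cvxpy for illustration; the barrier value h, its gradient, the class-K gain alpha, and the input bounds are placeholders for whatever visible-set definition the paper actually uses:

import numpy as np
import cvxpy as cp

def filter_operator_command(u_des, J, h, grad_h, u_min, u_max, alpha=1.0):
    """Minimally modify the operator's command u_des so that the
    zeroing-CBF condition  d/dt h(x) >= -alpha * h(x)  holds, where
    h >= 0 defines the visible set and dx/dt ~= J @ u."""
    n = len(u_des)
    u = cp.Variable(n)
    objective = cp.Minimize(cp.sum_squares(u - u_des))    # stay close to the operator
    constraints = [grad_h @ (J @ u) >= -alpha * h,         # visibility (forward invariance)
                   u >= u_min, u <= u_max]                 # actuation limits
    cp.Problem(objective, constraints).solve()
    return u.value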
Keyword :
continuum robot; control barrier function; optimal visual control; Robotic endoscopic surgery
Cite:
GB/T 7714: Deng, Zhen, Wei, Xiaoxiao, Pan, Chuanchuan, et al. Shared Control of Tendon-Driven Continuum Robots Using Visibility-Guaranteed Optimization for Endoscopic Surgery [J]. IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS, 2024, 6(2): 487-497.
MLA: Deng, Zhen, et al. "Shared Control of Tendon-Driven Continuum Robots Using Visibility-Guaranteed Optimization for Endoscopic Surgery." IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS 6.2 (2024): 487-497.
APA: Deng, Zhen, Wei, Xiaoxiao, Pan, Chuanchuan, Li, Guotao, & Hu, Ying. Shared Control of Tendon-Driven Continuum Robots Using Visibility-Guaranteed Optimization for Endoscopic Surgery. IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS, 2024, 6(2), 487-497.
Abstract :
Autonomous robotic massage holds the potential to alleviate the workload of nurses and improve the quality of healthcare. However, the complexity of the task and the dynamics of the environment present significant challenges for robotic massage. This paper presents a vision-based robotic massage (VBRM) framework that facilitates autonomous robot massaging of the human body while ensuring safe operation in a dynamic environment. The VBRM framework allows the operator to define the massage trajectory by drawing a 2D curve on an RGB image. An interactive trajectory planning method is developed to calculate a 3D massage trajectory from the 2D trajectory. This method accounts for potential movements of the human body and updates the planned trajectory using rigid point cloud registration. Additionally, a hybrid motion/force controller is employed to regulate the motion of the robot's end-effector, considering the possibility of excessive contact force. The proposed framework enables the operator to adjust the massage trajectory and speed according to their requirements. Real-world experiments are conducted to evaluate the efficacy of the proposed approach. The results demonstrate that the framework enables successful planning and execution of the massage task in a dynamic environment. Furthermore, the operator has the flexibility to set the massage trajectory, speed, and contact force freely, thereby enhancing human-machine interaction.
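The trajectory-update step relies on rigid point cloud registration; a minimal numpy sketch of the closed-form (Kabsch/SVD) alignment given corresponding points is shown below. It assumes correspondences have already been established; the actual framework would obtain them from full point cloud registration, which is not shown:

import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) aligning corresponding points
    src -> dst via the Kabsch/SVD method."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def update_trajectory(traj_points, R, t):
    """Re-express a planned 3D massage trajectory after the body has moved."""
    return traj_points @ R.T + t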
Keyword :
Interactive trajectory planning; Physical robot-environment interaction; Robot massage; Visual servoing
Cite:
GB/T 7714: Xu, Qinling, Deng, Zhen, Zeng, Chao, et al. Toward automatic robotic massage based on interactive trajectory planning and control [J]. COMPLEX & INTELLIGENT SYSTEMS, 2024, 10(3): 4397-4407.
MLA: Xu, Qinling, et al. "Toward automatic robotic massage based on interactive trajectory planning and control." COMPLEX & INTELLIGENT SYSTEMS 10.3 (2024): 4397-4407.
APA: Xu, Qinling, Deng, Zhen, Zeng, Chao, Li, Zhuoran, He, Bingwei, & Zhang, Jianwei. Toward automatic robotic massage based on interactive trajectory planning and control. COMPLEX & INTELLIGENT SYSTEMS, 2024, 10(3), 4397-4407.
Abstract :
The ability to automatically segment anatomical targets in medical images is crucial for clinical diagnosis and interventional therapy. However, supervised learning methods often require a large number of pixel-wise labels that are difficult to obtain. This paper proposes a weakly supervised glottis segmentation (WSGS) method for training end-to-end neural networks using only point annotations as training labels. The method works by iteratively generating pseudo-labels and training the segmentation network. An automatic seeded region growing (ASRG) algorithm is introduced to generate high-quality pseudo-labels by diffusing the point annotations based on network predictions and image features. Additionally, a novel loss function based on the structural similarity index measure (SSIM) is designed to enhance boundary segmentation. Using the trained network as its core, a glottis state monitor is developed to detect the motion behavior of the glottis and assist the anesthesiologist. Finally, the performance of the proposed approach was evaluated on two datasets, achieving an average mIoU of 82.7% and an accuracy of 91.3%. The proposed monitor was demonstrated to be effective, which is significant for clinical applications.
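A toy version of the seeded region growing idea, assuming a single point annotation, intensities normalized to [0, 1], and a purely intensity-based growing criterion; the paper's ASRG additionally uses network predictions, which is not reproduced here:

import numpy as np
from collections import deque

def seeded_region_growing(image, seed, tol=0.08):
    """Grow a pseudo-label mask outward from a point annotation, adding
    4-connected neighbours whose intensity stays close to the seed's."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(image[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(float(image[nr, nc]) - seed_val) <= tol:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask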
Keyword :
Glottis segmentation; Medical image segmentation; Weakly supervised learning
Cite:
GB/T 7714: Wei, Xiaoxiao, Deng, Zhen, Zheng, Xiaochun, et al. Weakly supervised glottis segmentation on endoscopic images with point supervision [J]. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2024, 92.
MLA: Wei, Xiaoxiao, et al. "Weakly supervised glottis segmentation on endoscopic images with point supervision." BIOMEDICAL SIGNAL PROCESSING AND CONTROL 92 (2024).
APA: Wei, Xiaoxiao, Deng, Zhen, Zheng, Xiaochun, He, Bingwei, & Hu, Ying. Weakly supervised glottis segmentation on endoscopic images with point supervision. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2024, 92.
Abstract :
The ability to effectively classify human emotion states is critically important for human-computer and human-robot interaction. However, emotion classification with physiological signals remains challenging due to the diversity of emotion expression and the characteristic differences among modal signals. A novel learning-based network architecture is presented that exploits four modalities of physiological signals, electrocardiogram, electrodermal activity, electromyography, and blood volume pulse, to classify emotion states. It features two kinds of attention modules, feature-level and semantic-level, which drive the network to focus on information-rich features by mimicking the human attention mechanism. The feature-level attention module encodes the rich information of each physiological signal, while the semantic-level attention module captures the semantic dependencies among modalities. The performance of the designed network is evaluated on the open-source Wearable Stress and Affect Detection dataset, where the developed emotion classification system achieves an accuracy of 83.88%. The results demonstrate that the proposed network can effectively process four-modal physiological signals and achieve high emotion classification accuracy.
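A toy PyTorch sketch of the semantic-level attention idea: the four per-modality feature vectors are treated as tokens and fed through self-attention before classification. The feature dimension, head count, and class count are illustrative assumptions:

import torch
import torch.nn as nn

class SemanticLevelAttention(nn.Module):
    """Toy semantic-level attention: treat the four per-modality feature
    vectors (ECG, EDA, EMG, BVP) as tokens and let self-attention model
    their cross-modal dependencies before classification."""
    def __init__(self, feat_dim=64, num_classes=3, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(4 * feat_dim, num_classes)

    def forward(self, modal_feats):                  # (batch, 4, feat_dim)
        fused, _ = self.attn(modal_feats, modal_feats, modal_feats)
        return self.classifier(fused.flatten(start_dim=1))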
Keyword :
affective computing; neural net architecture; neural nets
Cite:
GB/T 7714: Zou, C., Deng, Z., He, B., et al. Emotion classification with multi-modal physiological signals using multi-attention-based neural network [J]. Cognitive Computation and Systems, 2024.
MLA: Zou, C., et al. "Emotion classification with multi-modal physiological signals using multi-attention-based neural network." Cognitive Computation and Systems (2024).
APA: Zou, C., Deng, Z., He, B., Yan, M., Wu, J., & Zhu, Z. Emotion classification with multi-modal physiological signals using multi-attention-based neural network. Cognitive Computation and Systems, 2024.
Abstract :
Accurate control of continuum robots in confined environments presents a significant challenge due to the need for a precise kinematic model, which is susceptible to external interference. This paper introduces a model-less optimal visual control (MLOVC) method that enables a tendon-sheath-driven continuum robot (TSDCR) to effectively track visual targets in a confined environment while ensuring stability. The method allows for intraluminal navigation of TSDCRs along narrow lumens. To account for the presence of external outliers, a robust Jacobian estimation method is proposed, in which an improved iteratively reweighted least squares scheme with a sliding window is used to compute the robot's Jacobian matrix online from sensing data. The estimated Jacobian establishes the motion relationship between the visual features and the actuation. Furthermore, an optimal visual control method based on quadratic programming (QP) is designed for visual target tracking while considering the robot's physical and control constraints. The MLOVC method provides a reliable alternative for visual tracking that does not rely on the precise kinematics of TSDCRs and takes the impact of outliers into consideration. The control stability of the proposed approach is demonstrated through Lyapunov analysis. Simulations and experiments are conducted to evaluate the effectiveness of the MLOVC method, and the results demonstrate that it enhances tracking performance in terms of accuracy and stability.
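A minimal numpy sketch of sliding-window Jacobian estimation with Huber-style iteratively reweighted least squares; the window contents, weighting function, and damping term are illustrative and not taken from the paper:

import numpy as np

def irls_jacobian(dq_window, ds_window, iters=10, delta=1e-2):
    """Estimate the feature/actuation Jacobian J from a sliding window of
    actuation increments dq (n x m) and feature increments ds (n x k),
    down-weighting outlier samples with Huber-style IRLS."""
    n = dq_window.shape[0]
    w = np.ones(n)
    J = np.zeros((ds_window.shape[1], dq_window.shape[1]))
    for _ in range(iters):
        W = np.diag(w)
        # Weighted least squares: ds ~ dq @ J.T  =>  J.T = (dq' W dq)^-1 dq' W ds
        A = dq_window.T @ W @ dq_window
        B = dq_window.T @ W @ ds_window
        J = np.linalg.solve(A + 1e-9 * np.eye(A.shape[0]), B).T
        resid = np.linalg.norm(ds_window - dq_window @ J.T, axis=1)
        w = np.where(resid <= delta, 1.0, delta / np.maximum(resid, 1e-12))
    return J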
Keyword :
Jacobian matrices; Machine vision; Quadratic programming; Robot programming; Robot vision; Visual servoing
Cite:
GB/T 7714: Pan, Chuanchuan, Deng, Zhen, Zeng, Chao, et al. Optimal visual control of tendon-sheath-driven continuum robots with robust Jacobian estimation in confined environments [J]. Mechatronics, 2024, 104.
MLA: Pan, Chuanchuan, et al. "Optimal visual control of tendon-sheath-driven continuum robots with robust Jacobian estimation in confined environments." Mechatronics 104 (2024).
APA: Pan, Chuanchuan, Deng, Zhen, Zeng, Chao, He, Bingwei, & Zhang, Jianwei. Optimal visual control of tendon-sheath-driven continuum robots with robust Jacobian estimation in confined environments. Mechatronics, 2024, 104.
Abstract :
Automation in robotic endoscopic surgery is an important but challenging research topic. The flexible deformation of the endoscope makes it difficult for surgeons to steer, especially in a narrow workspace, and the safety of endoscope steering is a major concern in robotic endoscopy. This study presents a safety-aware control framework that enables a robotic nasotracheal intubation system (RNIS) to automatically steer the flexible endoscope during nasotracheal intubation (NTI). With the endoscopic image as feedback, the fusion of image intensity and optical flow measurements is used to detect the lumen center. A velocity-based orientation controller is designed to adjust the orientation of the endoscope tip to track the lumen center. To ensure surgical safety, the relative depth from the endoscope tip to the lumen surface is calculated from endoscopic images; with this relative depth as feedback, the feed motion of the endoscope is controlled to prevent collisions between the endoscope tip and the surrounding tissue. The position and orientation of the endoscope tip can thus be adjusted by the RNIS during NTI. Finally, extensive experiments on a training manikin were performed. The results indicate that the proposed control framework enables the RNIS to steer the flexible endoscope along the upper respiratory tract successfully and with high safety, and lumen center detection performed well according to annotations by experienced users.
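A crude illustration of intensity-based lumen-center detection (the lumen typically appears as the darkest image region); the paper fuses this with optical-flow measurements, a step omitted here, and the percentile threshold is an assumption:

import numpy as np
import cv2

def detect_lumen_center(gray_frame, percentile=5):
    """Estimate the lumen center in an endoscopic frame as the centroid
    of the darkest pixels after smoothing."""
    blurred = cv2.GaussianBlur(gray_frame, (9, 9), 0)
    thresh = np.percentile(blurred, percentile)
    ys, xs = np.nonzero(blurred <= thresh)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())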
Keyword :
Depth estimation; Endoscopic image processing; Nasotracheal intubation; Robotic endoscopy; Visual feedback control
Cite:
GB/T 7714: Deng, Zhen, Jiang, Peijie, Guo, Yuxin, et al. Safety-aware robotic steering of a flexible endoscope for nasotracheal intubation [J]. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 82.
MLA: Deng, Zhen, et al. "Safety-aware robotic steering of a flexible endoscope for nasotracheal intubation." BIOMEDICAL SIGNAL PROCESSING AND CONTROL 82 (2023).
APA: Deng, Zhen, Jiang, Peijie, Guo, Yuxin, Zhang, Shengzhan, Hu, Ying, Zheng, Xiaochun, et al. Safety-aware robotic steering of a flexible endoscope for nasotracheal intubation. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 82.
Abstract :
Nasotracheal intubation (NTI) is one of the most commonly performed procedures in anesthesia and is considered the gold standard for securing the airway of patients. Endoscope operation is critical to the success of NTI. However, this operation remains challenging as it requires the surgeon to classify anatomical landmarks and detect heading targets of the endoscope tip in a sequence of monocular images. To alleviate this problem, this study presents a learning-based navigation method that automatically classifies four different anatomical landmarks and detects the heading target of the endoscope tip from endoscopic images. First, an end-to-end multitask network is introduced that consists of a branch for anatomical landmark classification and another for heading target detection. In addition, a convolutional attention module is designed to improve network performance by combining spatial and channel attention. Second, an endoscopic dataset named intuNav is built for network training. The trained network calculates navigation information without image prior knowledge. Finally, extensive experiments on the built dataset and endoscopic videos demonstrate the high performance of our method, achieving classification (94%) and detection (79.4%) accuracies. The results also indicate that the proposed method is effective and efficient in navigation generation during NTI.
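A simplified PyTorch sketch of a channel-plus-spatial convolutional attention block of the kind the abstract describes (CBAM-like); the layer sizes and reduction ratio are illustrative, not the paper's:

import torch
import torch.nn as nn

class ConvAttention(nn.Module):
    """Simplified channel + spatial attention block combining the two
    attention types mentioned in the abstract."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(nn.Linear(channels, channels // reduction),
                                         nn.ReLU(),
                                         nn.Linear(channels // reduction, channels))
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                                   # (B, C, H, W)
        # Channel attention from global average pooling.
        ca = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3))))
        x = x * ca[:, :, None, None]
        # Spatial attention from channel-wise mean and max maps.
        sa_in = torch.cat([x.mean(dim=1, keepdim=True),
                           x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial_conv(sa_in))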
Keyword :
Deep learning; Endoscopic navigation; Image classification and detection; Nasotracheal intubation
Cite:
GB/T 7714: Deng, Zhen, Wei, Xiaoxiao, Zheng, Xiaochun, et al. Automatic endoscopic navigation based on attention-based network for Nasotracheal Intubation [J]. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 86.
MLA: Deng, Zhen, et al. "Automatic endoscopic navigation based on attention-based network for Nasotracheal Intubation." BIOMEDICAL SIGNAL PROCESSING AND CONTROL 86 (2023).
APA: Deng, Zhen, Wei, Xiaoxiao, Zheng, Xiaochun, & He, Bingwei. Automatic endoscopic navigation based on attention-based network for Nasotracheal Intubation. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 86.
Abstract :
Endoscopic operation is one of the most difficult parts of nasotracheal intubation (NTI), a surgical procedure used to secure the airway of a patient. Due to the requirement of eye-hand coordination, manual operation of a flexible endoscope is challenging, even for an experienced surgeon. To enhance intubation efficiency, this paper developed a master-slave robotic nasotracheal intubation system (RNIS) for endoscopic operation. Movements in three degrees of freedom of the endoscope are controlled by the RNIS. An assisted teleoperation control strategy is designed to assist the operator in remotely controlling the pose, i.e., position and orientation, of the endoscope tip via a joystick. To ensure the efficiency of intubation, visual feedback assistance is proposed, which fine-tunes the orientation of the endoscope tip as needed. The proposed system and methods are experimentally validated on a motion simulator and a phantom. The results demonstrated that the master-slave RNIS can successfully insert the flexible endoscope into the trachea of a phantom through the nasal cavity.
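A bare-bones illustration of velocity-style joystick teleoperation of a three-degree-of-freedom endoscope tip; the gains and state parameterization are assumptions, and the paper's visual-feedback assistance would add an image-error correction to the orientation on top of this integration:

import numpy as np

def integrate_joystick_command(pose, joystick, dt, v_gain=5.0, w_gain=0.4):
    """Velocity-style teleoperation mapping: joystick axes command feed
    speed and bending rates, integrated into an endoscope-tip state
    [insertion_mm, pitch_rad, yaw_rad]."""
    insertion, pitch, yaw = pose
    insertion += v_gain * joystick[0] * dt   # feed / retract
    pitch     += w_gain * joystick[1] * dt   # bend up / down
    yaw       += w_gain * joystick[2] * dt   # bend left / right
    return np.array([insertion, pitch, yaw])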
Keyword :
Nasotracheal intubation; Robotic endoscope; Surgical robotics; Teleoperation control; Visual feedback assistance
Cite:
GB/T 7714: Deng, Zhen, Zhang, Shengzhan, Guo, Yuxin, et al. Assisted teleoperation control of robotic endoscope with visual feedback for nasotracheal intubation [J]. ROBOTICS AND AUTONOMOUS SYSTEMS, 2023, 172.
MLA: Deng, Zhen, et al. "Assisted teleoperation control of robotic endoscope with visual feedback for nasotracheal intubation." ROBOTICS AND AUTONOMOUS SYSTEMS 172 (2023).
APA: Deng, Zhen, Zhang, Shengzhan, Guo, Yuxin, Jiang, Hongqi, Zheng, Xiaochun, & He, Bingwei. Assisted teleoperation control of robotic endoscope with visual feedback for nasotracheal intubation. ROBOTICS AND AUTONOMOUS SYSTEMS, 2023, 172.
Abstract :
This paper tackles the task of estimating the state of an object in a robotic hand. The state of the object includes its shape and pose, which are critically important for robotic in-hand manipulation. However, in-hand objects suffer from self-occlusion, making it challenging to perceive their complete shape and pose. To address this challenge, this work proposes a point-cloud processing framework for shape completion and pose estimation of in-hand objects. First, the input point cloud is segmented with a region-growing algorithm to obtain the points belonging to the target object. Then, a neural network with an auto-encoder structure is designed to perform shape completion and 6D pose estimation of the in-hand object. The latent feature of the network is used to regress the 6D pose, i.e., position and orientation, of the object. The effectiveness of the proposed framework is evaluated through comparison and real-world experiments. Experimental results show that our approach achieves high accuracy in shape completion and pose estimation of robotic in-hand objects.
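A toy PyTorch auto-encoder illustrating the shape-completion-plus-pose-regression structure described above; the PointNet-style encoder, layer sizes, output point count, and 9-dimensional pose parameterization are illustrative assumptions:

import torch
import torch.nn as nn

class ShapePoseNet(nn.Module):
    """Toy auto-encoder for in-hand object point clouds: a shared latent
    code is decoded into a completed point cloud and, through a separate
    head, regressed to a pose (3 translation + 6D rotation)."""
    def __init__(self, latent_dim=256, out_points=1024):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, out_points * 3))
        self.pose_head = nn.Linear(latent_dim, 9)

    def forward(self, points):                              # (B, N, 3) partial cloud
        latent = self.point_mlp(points).max(dim=1).values   # symmetric pooling
        completed = self.decoder(latent).view(points.shape[0], -1, 3)
        pose = self.pose_head(latent)
        return completed, pose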
Keyword :
Deep learning; Pose estimation; Shape completion
Cite:
GB/T 7714: Jin, X., Deng, Z., Zhang, Z., et al. Shape and Pose Reconstruction of Robotic In-Hand Objects from a Single Depth Camera [unknown].
MLA: Jin, X., et al. "Shape and Pose Reconstruction of Robotic In-Hand Objects from a Single Depth Camera" [unknown].
APA: Jin, X., Deng, Z., Zhang, Z., Lu, L., Gao, G., & He, B. Shape and Pose Reconstruction of Robotic In-Hand Objects from a Single Depth Camera [unknown].