Publication Search

Query:

Scholar name: 郑明魁 (Zheng, Mingkui)

A Lossless Compression Method for Radio Spectrum Monitoring Data Based on a Wavelet-Like Transform
Journal Article | 2024, 38(7), 152-158 | 电子测量与仪器学报

Abstract :

The storage and analysis of massive radio spectrum monitoring data is an important part of radio regulation. Spectrum data exhibit temporal correlation as well as correlated redundancy between different frequency points. This paper therefore designs a lossless compression method for radio spectrum monitoring data based on a wavelet-like transform. The method first reshapes the one-dimensional spectrum signal into a two-dimensional matrix according to its temporal correlation; the resulting matrix is redundant in both the horizontal and vertical directions, so the algorithm replaces the predict and update modules of the traditional wavelet with convolutional neural networks and introduces an adaptive compression block to handle features of different dimensions, yielding a more compact representation of the spectrum data. A context-based deep entropy model is further designed that derives entropy-coding parameters from the subband coefficients of the wavelet-like transform to estimate cumulative probabilities, thereby compressing the spectrum data. Experimental results show that the algorithm improves on existing traditional lossless compressors for spectrum monitoring data such as Deflate, and achieves more than a 20% gain in compression over typical two-dimensional lossless image codecs such as JPEG2000, PNG, and JPEG-LS.
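
The lifting structure described above can be sketched in a few lines: a minimal PyTorch example, assuming a single lifting level that splits the 2D spectrum matrix into even and odd rows and replaces the fixed wavelet predict/update filters with small CNNs (the layer sizes and module names are illustrative, not taken from the paper).

```python
import torch
import torch.nn as nn

class LiftingCNN(nn.Module):
    """One wavelet-like lifting level: split -> CNN predict -> CNN update.
    A sketch of the idea in the abstract; layer sizes are assumptions."""
    def __init__(self, ch=1):
        super().__init__()
        # small CNNs stand in for the fixed predict/update filters
        self.predict = nn.Sequential(
            nn.Conv2d(ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, ch, 3, padding=1))
        self.update = nn.Sequential(
            nn.Conv2d(ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, ch, 3, padding=1))

    def forward(self, x):                  # x: (N, C, H, W) 2D spectrum matrix
        even, odd = x[:, :, 0::2], x[:, :, 1::2]   # split along the time axis
        detail = odd - self.predict(even)          # high-pass (residual) band
        approx = even + self.update(detail)        # low-pass (smooth) band
        return approx, detail                      # subbands fed to the entropy model

x = torch.randn(1, 1, 64, 128)             # 64 sweeps x 128 frequency bins
approx, detail = LiftingCNN()(x)
print(approx.shape, detail.shape)          # torch.Size([1, 1, 32, 128]) each
```

For truly lossless coding, the predict and update outputs would additionally be rounded to integers so the transform is exactly invertible; that detail is omitted here.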

Keyword :

convolutional neural networks; lossless compression; entropy coding; wavelet-like transform; spectrum monitoring data

Cite:

GB/T 7714 张承琰 , 郑明魁 , 刘会明 et al. 一种基于类小波变换的无线电频谱监测数据无损压缩方法 [J]. | 电子测量与仪器学报 , 2024 , 38 (7) : 152-158 .
MLA 张承琰 et al. "一种基于类小波变换的无线电频谱监测数据无损压缩方法" . | 电子测量与仪器学报 38 . 7 (2024) : 152-158 .
APA 张承琰 , 郑明魁 , 刘会明 , 易天儒 , 李少良 , 陈祖儿 . 一种基于类小波变换的无线电频谱监测数据无损压缩方法 . | 电子测量与仪器学报 , 2024 , 38 (7) , 152-158 .

A Video Frame Interpolation Model Based on Depth-Value Forward Projection
Journal Article | 2024, 4(04), 5-8 | 信息技术与信息化

Abstract :

Video frame interpolation is widely used; its goal is to generate an intermediate frame given two consecutive video frames. To address the pixel-overlap problem that frequently arises in forward projection, where multiple pixels are projected onto the same location, a video frame interpolation model based on depth-value forward projection is proposed. The forward projection is linearly weighted according to the depth values produced by the proposed depth estimation module, in a way that is invariant to depth translation, which improves pixel reconstruction of foreground object boundaries and background pixels in overlapping regions. Experimental results on the public video frame interpolation dataset Vimeo-90k show that the proposed algorithm achieves strong scores on the PSNR, SSIM, and LPIPS metrics compared with other algorithms, verifying its effectiveness.
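
A minimal NumPy sketch of depth-weighted forward splatting of the kind the abstract describes, assuming an exponential-in-negative-depth weight (the paper's exact weighting is not given in the abstract; the function and variable names are illustrative):

```python
import numpy as np

def forward_splat(frame, flow, depth, H, W):
    """Forward-project pixels along the flow; where several pixels land on
    the same target location, blend them with weights that favour smaller
    depth (nearer objects), so the foreground wins the overlap."""
    out = np.zeros((H, W, 3))
    wsum = np.zeros((H, W, 1))
    ys, xs = np.mgrid[0:H, 0:W]
    tx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    ty = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    w = np.exp(-depth)[..., None]      # nearer pixel -> larger weight
    np.add.at(out, (ty, tx), w * frame)
    np.add.at(wsum, (ty, tx), w)
    return out / np.maximum(wsum, 1e-8)

H, W = 4, 5
frame = np.random.rand(H, W, 3)
flow = np.zeros((H, W, 2)); flow[..., 0] = 1.0   # shift right by one pixel
depth = np.random.rand(H, W)
print(forward_splat(frame, flow, depth, H, W).shape)   # (4, 5, 3)
```

Because the weights are normalized, adding a constant to every depth value multiplies all weights by the same factor and cancels out, which is one way to realize the depth-translation invariance the abstract mentions.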

Keyword :

forward projection; image synthesis; depth estimation; video frame interpolation; video frame prediction

Cite:

GB/T 7714 陈祖儿 , 郑明魁 , 张承琰 et al. 基于深度值前向投影的视频帧插值模型 [J]. | 信息技术与信息化 , 2024 , 4 (04) : 5-8 .
MLA 陈祖儿 et al. "基于深度值前向投影的视频帧插值模型" . | 信息技术与信息化 4 . 04 (2024) : 5-8 .
APA 陈祖儿 , 郑明魁 , 张承琰 , 易天儒 . 基于深度值前向投影的视频帧插值模型 . | 信息技术与信息化 , 2024 , 4 (04) , 5-8 .

A Cross-Scene Novel View Synthesis Method Based on Neural Radiance Fields
Journal Article | 2024, 31(08), 108-112 | 广播电视网络

Abstract :

This paper proposes a generalizable NeRF framework based on a global attention fusion mechanism. It uses reprojected feature extraction as a feed-forward prior module and designs a decoder built on a two-stage attention aggregation mechanism, realizing an end-to-end novel view synthesis method that generalizes across scenes. Qualitative and quantitative experimental comparisons demonstrate that the scheme makes full use of the deep feature information in the known source images and learns the relationships among multi-view spatial features in the scene, guiding the neural radiance field to better reconstruct 3D representations of unseen scenes and to render novel views more accurately along rays in real, complex environments through volume rendering.
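
As a rough sketch of the aggregation idea, the snippet below fuses the features one ray sample gathers from its reprojections into the source views using a single attention stage (the paper's decoder uses a two-stage aggregation whose details the abstract does not give; all names here are hypothetical):

```python
import torch
import torch.nn as nn

class ViewAggregator(nn.Module):
    """Fuse the features a 3D sample point gathers from N source views.
    A learned query token attends over the per-view tokens; the fused
    feature is what a NeRF-style MLP would decode into density and colour."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, view_feats):        # (B, N_views, dim) reprojected feats
        q = self.query.expand(view_feats.size(0), -1, -1)
        fused, _ = self.attn(q, view_feats, view_feats)
        return fused.squeeze(1)           # (B, dim) per-point fused feature

feats = torch.randn(1024, 3, 64)          # 1024 ray samples, 3 source views
print(ViewAggregator()(feats).shape)      # torch.Size([1024, 64])
```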

Keyword :

attention mechanism; neural radiance fields; view synthesis

Cite:

GB/T 7714 刘阳 , 郑明魁 , 叶张帆 . 一种基于神经辐射场的跨场景新视图合成方法 [J]. | 广播电视网络 , 2024 , 31 (08) : 108-112 .
MLA 刘阳 et al. "一种基于神经辐射场的跨场景新视图合成方法" . | 广播电视网络 31 . 08 (2024) : 108-112 .
APA 刘阳 , 郑明魁 , 叶张帆 . 一种基于神经辐射场的跨场景新视图合成方法 . | 广播电视网络 , 2024 , 31 (08) , 108-112 .

MDU-sampling: Multi-domain uniform sampling method for large-scale outdoor LiDAR point cloud registration SCIE
Journal Article | 2024, 60(5) | ELECTRONICS LETTERS
WoS CC Cited Count: 1

Abstract :

Sampling is a crucial concern for outdoor light detection and ranging (LiDAR) point cloud registration due to the large amount of point cloud data. Numerous algorithms have been devised to tackle this issue by selecting key points. However, these approaches often necessitate extensive computations, giving rise to challenges related to computational time and complexity. This letter proposes a multi-domain uniform sampling method (MDU-sampling) for large-scale outdoor LiDAR point cloud registration. Feature extraction based on deep learning aggregates information from the neighbourhood, so there is redundancy between adjacent features; the sampling method in this paper is therefore carried out in both the spatial and feature domains. First, uniform sampling is executed in the spatial domain, maintaining local point cloud uniformity. This is believed to preserve more potential point correspondences and is beneficial for subsequent neighbourhood information aggregation and feature sampling. Subsequently, a secondary sampling in the feature domain is performed to reduce redundancy among the features of neighbouring points. Notably, only points on the same ring in LiDAR data are considered as neighbouring points, eliminating the need for an additional neighbouring-point search and thereby speeding up processing. Experimental results demonstrate that the approach enhances accuracy and robustness compared with benchmarks.
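
A hedged NumPy sketch of the two-stage sampling: uniform subsampling in the spatial domain, then dropping points whose features nearly duplicate the previously kept point on the same LiDAR ring (the stride, cosine-similarity test, and threshold are assumptions, not values from the letter):

```python
import numpy as np

def mdu_sample(points, feats, rings, spatial_stride=4, sim_thresh=0.95):
    """Stage 1: keep every k-th point (uniform in acquisition order, which
    is roughly uniform in space for a spinning LiDAR).
    Stage 2: on each ring, drop a point if its feature is nearly identical
    to the previously kept point on that ring."""
    idx = np.arange(0, len(points), spatial_stride)       # spatial stage
    pts, f, r = points[idx], feats[idx], rings[idx]

    keep, last = [], {}                                   # feature stage
    fn = f / np.linalg.norm(f, axis=1, keepdims=True)
    for i in range(len(pts)):
        prev = last.get(r[i])                             # same-ring neighbour only
        if prev is None or fn[i] @ fn[prev] < sim_thresh:
            keep.append(i)
            last[r[i]] = i
    keep = np.array(keep)
    return pts[keep], f[keep]

points = np.random.rand(10000, 3)
feats = np.random.rand(10000, 32)
rings = np.random.randint(0, 64, 10000)                   # 64-beam LiDAR
p, f = mdu_sample(points, feats, rings)
print(p.shape, f.shape)
```

Restricting the similarity test to the previously kept point on the same ring is what removes the need for a k-nearest-neighbour search, matching the speed argument in the abstract.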

Keyword :

artificial intelligence; robot vision; signal processing; SLAM (robots)

Cite:

GB/T 7714 Ou, Wengjun , Zheng, Mingkui , Zheng, Haifeng . MDU-sampling: Multi-domain uniform sampling method for large-scale outdoor LiDAR point cloud registration [J]. | ELECTRONICS LETTERS , 2024 , 60 (5) .
MLA Ou, Wengjun et al. "MDU-sampling: Multi-domain uniform sampling method for large-scale outdoor LiDAR point cloud registration" . | ELECTRONICS LETTERS 60 . 5 (2024) .
APA Ou, Wengjun , Zheng, Mingkui , Zheng, Haifeng . MDU-sampling: Multi-domain uniform sampling method for large-scale outdoor LiDAR point cloud registration . | ELECTRONICS LETTERS , 2024 , 60 (5) .


MIRNet-Plus: An Improved Low-Light Image Enhancement Method Based on Rich Feature Learning CSCD PKU
Journal Article | 2024, 45(03), 664-669 | 小型微型计算机系统

Abstract :

Image enhancement is a fundamental computer vision task, and recovering a high-quality bright image from a low-light image remains an open problem. In recent years, image restoration techniques dominated by convolutional neural networks (CNNs) have made substantial progress. For low-light image enhancement, this method uses a double selective kernel feature fusion (Double SKFF) scheme that strengthens the exchange of information between intermediate layers at different resolutions to capture more contextual and spatial information. A depthwise attention module (DWM) is also designed to share feature information across the tensor and supplement the original features, yielding richer representations. The method further introduces a multi-colour-space neural retouching block trained jointly in three colour spaces (Lab, RGB, HSV) to obtain better enhancement results. The proposed MIRNet-Plus improves PSNR by 5.3% over the baseline method, from 23.73 dB to 24.98 dB.
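
A minimal sketch of a depthwise attention block of the kind the abstract names; the actual DWM design is not detailed there, so this stand-in combines a depthwise convolution with a squeeze-style channel gate:

```python
import torch
import torch.nn as nn

class DepthwiseAttention(nn.Module):
    """A depthwise conv extracts per-channel spatial features; a squeeze-style
    gate reweights channels, and the result is added back to the input."""
    def __init__(self, ch=64):
        super().__init__()
        self.dw = nn.Conv2d(ch, ch, 3, padding=1, groups=ch)  # depthwise conv
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 4, 1), nn.ReLU(),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid())

    def forward(self, x):
        y = self.dw(x)
        return x + y * self.gate(y)        # supplement the original features

x = torch.randn(1, 64, 32, 32)
print(DepthwiseAttention()(x).shape)       # torch.Size([1, 64, 32, 32])
```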

Keyword :

low-light images; convolutional neural networks; image enhancement; multi-colour-space neural retouching block; depthwise attention module

Cite:

GB/T 7714 罗林 , 余联想 , 郑明魁 . MIRNet-Plus:基于丰富特征学习的低光图像增强改进方法 [J]. | 小型微型计算机系统 , 2024 , 45 (03) : 664-669 .
MLA 罗林 et al. "MIRNet-Plus:基于丰富特征学习的低光图像增强改进方法" . | 小型微型计算机系统 45 . 03 (2024) : 664-669 .
APA 罗林 , 余联想 , 郑明魁 . MIRNet-Plus:基于丰富特征学习的低光图像增强改进方法 . | 小型微型计算机系统 , 2024 , 45 (03) , 664-669 .

RTONet: Real-Time Occupancy Network for Semantic Scene Completion Scopus
Journal Article | 2024, 9(10), 1-8 | IEEE Robotics and Automation Letters

Abstract :

The comprehension of 3D semantic scenes holds paramount significance in autonomous driving and robotics. Nevertheless, simultaneously achieving real-time processing and high precision in complex, expansive outdoor environments poses a formidable challenge. In response, we propose a novel occupancy network named RTONet, which is built on a teacher-student model. To enhance the network's ability to recognize various objects, the decoder incorporates dilated convolution layers with different receptive fields and utilizes a multi-path structure. Furthermore, we develop an automatic frame selection algorithm to augment the guidance capability of the teacher network. The proposed method outperforms existing grid-based approaches in semantic completion (mIoU) and achieves state-of-the-art real-time inference speed while exhibiting competitive performance in scene completion (IoU) on the SemanticKITTI benchmark.
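
The multi-path, multi-receptive-field decoder idea can be sketched as parallel dilated convolutions whose outputs are fused; this is a generic illustration, as RTONet's actual layer layout and teacher-student training loop are not specified in the abstract:

```python
import torch
import torch.nn as nn

class MultiPathDilatedBlock(nn.Module):
    """Parallel 3x3 convs with different dilation rates see different
    receptive fields (small objects vs. large structures), then fuse."""
    def __init__(self, ch=32, rates=(1, 2, 4)):
        super().__init__()
        self.paths = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=r, dilation=r) for r in rates)
        self.fuse = nn.Conv2d(ch * len(rates), ch, 1)   # 1x1 fusion conv

    def forward(self, x):
        return self.fuse(torch.cat([p(x) for p in self.paths], dim=1))

x = torch.randn(1, 32, 64, 64)             # BEV feature map from an encoder
print(MultiPathDilatedBlock()(x).shape)    # torch.Size([1, 32, 64, 64])
```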

Keyword :

Decoding; Deep Learning for Visual Perception; Feature extraction; Laser radar; LiDAR; Mapping; Occupancy Grid; Point cloud compression; Real-time systems; Semantics; Semantic Scene Understanding; Three-dimensional displays

Cite:

GB/T 7714 Lai, Q. , Zheng, H. , Feng, X. et al. RTONet: Real-Time Occupancy Network for Semantic Scene Completion [J]. | IEEE Robotics and Automation Letters , 2024 , 9 (10) : 1-8 .
MLA Lai, Q. et al. "RTONet: Real-Time Occupancy Network for Semantic Scene Completion" . | IEEE Robotics and Automation Letters 9 . 10 (2024) : 1-8 .
APA Lai, Q. , Zheng, H. , Feng, X. , Zheng, M. , Chen, H. , Chen, W. . RTONet: Real-Time Occupancy Network for Semantic Scene Completion . | IEEE Robotics and Automation Letters , 2024 , 9 (10) , 1-8 .

Camera Pose-Based Background Modeling for Video Coding in Moving Cameras SCIE
Journal Article | 2024, 34(5), 4054-4069 | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY

Abstract :

For moving cameras, the video content changes significantly, which leads to inaccurate prediction in traditional inter prediction and results in limited compression efficiency. To solve these problems, first, we propose a camera pose-based background modeling (CP-BM) framework that uses the camera motion and the textures of reconstructed frames to model the background of the current frame. Compared with the reconstructed frames, the predicted background frame generated by CP-BM is more geometrically similar to the current frame in position and is more strongly correlated with it at the pixel level; thus, it can serve as a higher-quality reference for inter prediction, and the compression efficiency can be improved. Second, to compensate the motion of the background pixels, we construct a pixel-level motion vector field that can accurately describe various complex motions with only a small overhead. Our method is more general than other motion models because it has more degrees of freedom, and when the degrees of freedom are decreased, it encompasses other motion models as special cases. Third, we propose an optical flow-based depth estimation (OF-DE) method to synchronize the depth information at the codec, which is used to build the motion vector field. Finally, we integrate the overall scheme into the High Efficiency Video Coding (HEVC) and Versatile Video Coding (VVC) reference software HM-16.7 and VTM-10.0. Experimental results demonstrate that in HM-16.7, for in-vehicle video sequences, our solution has an average Bjøntegaard delta bit rate (BD-rate) gain of 8.02% and reduces the encoding time by 20.9% due to the superiority of our scheme in motion estimation. Moreover, in VTM-10.0 with affine motion compensation (MC) turned off and turned on, our method has average BD-rate gains of 5.68% and 0.56%, respectively.
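
The geometric core, reprojecting a reconstructed frame into the current camera to predict the background, follows the standard pinhole model. The sketch below is that textbook warp under assumed intrinsics K, relative pose (R, t), and a per-pixel depth map, not the paper's full CP-BM pipeline:

```python
import numpy as np

def reproject(depth, K, R, t, H, W):
    """Lift each pixel of the reference frame to 3D using its depth, move it
    by the relative camera pose (R, t), and project it into the current
    frame; the per-pixel offsets form a background motion vector field."""
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    pts = np.linalg.inv(K) @ pix * depth.reshape(-1)      # 3D in ref camera
    proj = K @ (R @ pts + t[:, None])                     # into current camera
    uv = (proj[:2] / proj[2]).T.reshape(H, W, 2)
    return uv                                             # target pixel coords

H, W = 4, 6
K = np.array([[500., 0, W / 2], [0, 500., H / 2], [0, 0, 1]])
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])               # small sideways move
depth = np.full((H, W), 10.0)
print(reproject(depth, K, R, t, H, W)[0, 0])              # warped coordinate
```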

Keyword :

background modeling; Bit rate; camera pose; Cameras; Computational modeling; Encoding; Estimation; moving cameras; Predictive models; Video coding

Cite:

GB/T 7714 Fang, Zheng , Zheng, Mingkui , Chen, Pingping et al. Camera Pose-Based Background Modeling for Video Coding in Moving Cameras [J]. | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY , 2024 , 34 (5) : 4054-4069 .
MLA Fang, Zheng et al. "Camera Pose-Based Background Modeling for Video Coding in Moving Cameras" . | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 34 . 5 (2024) : 4054-4069 .
APA Fang, Zheng , Zheng, Mingkui , Chen, Pingping , Chen, Zhifeng , Oliver Wu, Dapeng . Camera Pose-Based Background Modeling for Video Coding in Moving Cameras . | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY , 2024 , 34 (5) , 4054-4069 .

A Radio Spectrum Data Compression Method Based on Bit-Depth Transformation
Journal Article | 2024, 4(04), 142-145 | 信息技术与信息化

Abstract :

Electromagnetic spectrum data are quantized digital collections that characterize the electromagnetic spectrum situation. Spectrum data exhibit spatial, temporal, and frequency correlation, so data within a local region and at adjacent frequencies are redundant. To address this, a radio spectrum data compression method based on bit-depth transformation is proposed. Raw radio spectrum monitoring data are first converted by a data transformation module into spectrum sub-images suited to subsequent compression; the most significant byte (MSB) sub-image is then compressed with a traditional coding method, and an autoregressive entropy model is designed to achieve high-performance compression of the least significant byte (LSB) sub-image, with wavefront parallelism introduced to speed up decoding. Experimental results show a compression ratio of 39.05%; compared with existing traditional lossless compression methods such as Huffman coding and with lossless image compression methods, the proposed method achieves better compression while preserving data accuracy.
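
A minimal sketch of the bit-depth (byte-plane) split, assuming 16-bit spectrum samples and using zlib as a stand-in for the traditional codec on the MSB plane (the autoregressive entropy model and wavefront parallelism are beyond a short example):

```python
import numpy as np
import zlib

# 16-bit spectrum matrix: rows = sweeps, cols = frequency bins
spectrum = np.random.randint(0, 2**16, (256, 1024), dtype=np.uint16)

msb = (spectrum >> 8).astype(np.uint8)    # smooth, highly compressible plane
lsb = (spectrum & 0xFF).astype(np.uint8)  # noisier plane -> learned entropy model

# stand-in for the "traditional coding method" on the MSB plane
msb_bytes = zlib.compress(msb.tobytes(), 9)
print(len(msb_bytes), "bytes for the MSB plane of", msb.size, "samples")

# lossless reconstruction check
restored = (msb.astype(np.uint16) << 8) | lsb
assert np.array_equal(restored, spectrum)
```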

Keyword :

image coding; lossless compression; neural networks; spectrum data compression

Cite:

GB/T 7714 张承琰 , 郑明魁 , 吴孔贤 et al. 一种基于位深度变换的无线电频谱数据压缩方法 [J]. | 信息技术与信息化 , 2024 , 4 (04) : 142-145 .
MLA 张承琰 et al. "一种基于位深度变换的无线电频谱数据压缩方法" . | 信息技术与信息化 4 . 04 (2024) : 142-145 .
APA 张承琰 , 郑明魁 , 吴孔贤 , 刘会明 . 一种基于位深度变换的无线电频谱数据压缩方法 . | 信息技术与信息化 , 2024 , 4 (04) , 142-145 .

A Pose-State-Based Inter-Frame Coding Method for LiDAR Point Clouds
Journal Article | 2024, 4(04), 126-129 | 信息技术与信息化

Abstract :

Compressing point clouds dynamically acquired by LiDAR has important applications in intelligent driving. To remove the temporal redundancy in point cloud sequences, this paper designs a pose-state-based inter-frame coding method for LiDAR point clouds. In dynamic acquisition scenarios the point cloud is spread widely and sparsely in 3D space, so it is mapped onto a 2D range image. On this basis, an efficient inter-frame predictive coding method that exploits the LiDAR pose relationship is proposed to eliminate temporal redundancy. Because of occlusion by 3D objects, LiDAR motion tends to leave holes in the predicted range image, degrading predictive coding performance; a hole-filling step improves the prediction accuracy, and the prediction residual is quantized and compressed. Experimental results show that the proposed method outperforms coding methods such as G-PCC in coding performance.
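
A small NumPy sketch of the range-image mapping the method builds on: a spherical projection with an assumed 64-ring, 0.2-degree-azimuth layout (the pose-compensated prediction, hole filling, and residual coding are omitted):

```python
import numpy as np

def to_range_image(points, n_rings=64, n_cols=1800,
                   fov_up=np.deg2rad(2.0), fov_down=np.deg2rad(-24.8)):
    """Spherical projection of an (N, 3) LiDAR scan into a 2D range image:
    row = elevation bin (ring), column = azimuth bin, value = range."""
    x, y, z = points.T
    rng = np.linalg.norm(points, axis=1)
    az = np.arctan2(y, x)                              # azimuth in [-pi, pi]
    el = np.arcsin(z / np.maximum(rng, 1e-8))          # elevation angle
    col = ((az + np.pi) / (2 * np.pi) * n_cols).astype(int) % n_cols
    row = np.clip(((fov_up - el) / (fov_up - fov_down) * n_rings).astype(int),
                  0, n_rings - 1)
    img = np.zeros((n_rings, n_cols))
    img[row, col] = rng       # last point wins; a real mapping keeps the nearest
    return img

pts = np.random.randn(1000, 3) * 10
print(to_range_image(pts).shape)                       # (64, 1800)
```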

Keyword :

dynamically acquired LiDAR point clouds; point cloud compression; hole filling; range image

Cite:

GB/T 7714 李少良 , 郑明魁 , 石元龙 et al. 基于位姿状态的激光雷达点云帧间编码方法 [J]. | 信息技术与信息化 , 2024 , 4 (04) : 126-129 .
MLA 李少良 et al. "基于位姿状态的激光雷达点云帧间编码方法" . | 信息技术与信息化 4 . 04 (2024) : 126-129 .
APA 李少良 , 郑明魁 , 石元龙 , 张承琰 . 基于位姿状态的激光雷达点云帧间编码方法 . | 信息技术与信息化 , 2024 , 4 (04) , 126-129 .
