Query:
Scholar name: 于元隆 (Yu Yuanlong)
Abstract :
Objective: To meet the needs of task scheduling optimization, real-time task data sharing, work-efficiency analysis, and the mining of constraining factors during the implementation of research projects, a big-data-driven research process evaluation and analysis system, HTower, is designed. Methods: The system takes the task as the basic unit for project scheduling and implementation tracking. Through online collaborative editing of task execution plans and results, it solves the difficulty of collaboratively updating and sharing research process data and reduces the research time wasted on frequent meetings and instant messaging. Based on quantitative evaluations of working hours, work efficiency, and task progress, it mines the key factors constraining project implementation efficiency and individual research efficiency, guiding project leaders and researchers to improve research efficiency and project quality. Results: HTower raises the ahead-of-schedule completion rate of project tasks to 10.8% and achieves an on-time rate of 79.5%; it can quantitatively analyze the factors constraining project implementation efficiency and can also be used for quantitative evaluation of graduate students' research processes. Conclusion: Applying HTower not only improves task scheduling and team research efficiency, but also helps identify the causes of low research efficiency among graduate students in a timely manner, optimizing supervisors' academic guidance strategies and promoting graduate students' research abilities.
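The ahead-of-schedule and on-time rates quoted above are simple ratios over completed tasks. A minimal sketch of how such schedule metrics could be computed, assuming each task record carries a deadline and an actual finish date (the exact definitions used by HTower are not given in the abstract):

```python
from datetime import date

def schedule_rates(tasks):
    """Compute ahead-of-schedule and on-time completion rates.

    Each task is a (deadline, finished) pair of dates. A task finished
    strictly before its deadline counts as ahead of schedule; finished
    on or before the deadline counts as on time.
    """
    ahead = sum(1 for deadline, finished in tasks if finished < deadline)
    on_time = sum(1 for deadline, finished in tasks if finished <= deadline)
    n = len(tasks)
    return ahead / n, on_time / n

tasks = [
    (date(2025, 3, 10), date(2025, 3, 8)),   # ahead of schedule
    (date(2025, 3, 12), date(2025, 3, 12)),  # exactly on time
    (date(2025, 3, 15), date(2025, 3, 20)),  # late
]
ahead_rate, on_time_rate = schedule_rates(tasks)
```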
Keyword :
task scheduling; collaborative editing; big data; research process evaluation; quantitative evaluation
Cite:
GB/T 7714: 廖龙龙, 曾文滨, 方鑫, et al. 大数据驱动的科研过程评估分析系统[J]. 河南科技学院学报(自然科学版), 2025, 53(3): 70-79.
MLA: 廖龙龙, et al. "大数据驱动的科研过程评估分析系统." 河南科技学院学报(自然科学版) 53.3 (2025): 70-79.
APA: 廖龙龙, 曾文滨, 方鑫, 郑志伟, 于元隆. 大数据驱动的科研过程评估分析系统. 河南科技学院学报(自然科学版), 2025, 53(3), 70-79.
Abstract :
Faced with the wide-scale characteristics of objects in optical remote sensing images, the current object detection models are always unable to provide satisfactory detection capabilities for remote sensing tasks. To achieve better wide-scale coverage for various remote sensing regions of interest, this article introduces a multiprediction mechanism to build a novel region generation model, namely, a multiple region proposal experts network (MRPENet). Meanwhile, to achieve both region proposal coverage and receptive field coverage of wide-scale objects, we constructed a prior design of an anchor (PDA) module and an adaptive features compensation (AFC) module to achieve the coverage of wide-scale remote sensing objects. To better utilize the multiexpert characteristics of our model, we customized a new training sample allocation strategy, dynamic scale-assigned expert learning (DSAEL), to cultivate the ability of experts to deal with objects at various scales. To the best of our knowledge, this is the first time that a multiple region proposal network (RPN) mechanism has been used in the object detection of optical remote sensing images. Extensive experiments have shown the generality and effectiveness of our MRPENet. Without bells and whistles, MRPENet achieves a new state-of-the-art (SOTA) on standard benchmarks, i.e., DOTA-v1.0 [82.02% mean average precision (mAP)], HRSC2016 (98.16% mAP), and FAIR1M-v1.0 (48.80% mAP).
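The core of a scale-assigned expert scheme is routing each training sample to the expert responsible for its scale band. A toy sketch of that routing step, assuming boxes are bucketed by the square root of their area against fixed scale boundaries (the boundaries and the dynamic part of DSAEL are illustrative, not taken from the paper):

```python
def assign_to_experts(box_sizes, bounds):
    """Assign each ground-truth box to a scale expert.

    box_sizes: sqrt(area) of each box; bounds: ascending scale
    boundaries, e.g. [32, 96] splits boxes among small/medium/large
    experts 0/1/2. Counting how many boundaries a box exceeds gives
    its expert index directly.
    """
    assignments = []
    for size in box_sizes:
        expert = sum(size >= b for b in bounds)
        assignments.append(expert)
    return assignments
```

In a full detector each expert would then only receive losses from its own bucket, so no single RPN head has to cover the entire scale range.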
Keyword :
Adaptation models; Adaptive features compensation (AFC); Adaptive systems; Detectors; dynamic scale-assigned expert learning (DSAEL); Feature extraction; multi-prediction mechanism; object detection; Optical imaging; Proposals; remote sensing; Semantics; Training; wide-scale coverage
Cite:
GB/T 7714: Lin, Qifeng, Huang, Haibin, Zhu, Daoye, et al. Multiple Region Proposal Experts Network for Wide-Scale Remote Sensing Object Detection[J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63.
MLA: Lin, Qifeng, et al. "Multiple Region Proposal Experts Network for Wide-Scale Remote Sensing Object Detection." IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 63 (2025).
APA: Lin, Qifeng, Huang, Haibin, Zhu, Daoye, Chen, Nuo, Fu, Gang, Yu, Yuanlong. Multiple Region Proposal Experts Network for Wide-Scale Remote Sensing Object Detection. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63.
Abstract :
Multi-view clustering has attracted significant attention in recent years because it can leverage the consistent and complementary information of multiple views to improve clustering performance. However, effectively fusing the information of multiple views and balancing their consistent and complementary information are common challenges in multi-view clustering. Most existing multi-view fusion works focus on weighted-sum fusion and concatenation fusion, which are unable to fully fuse the underlying information and do not consider balancing the consistent and complementary information of multiple views. To this end, we propose Cross-view Fusion for Multi-view Clustering (CFMVC). Specifically, CFMVC combines a deep neural network and a graph convolutional network for cross-view information fusion, which fully fuses the feature information and structural information of multiple views. To balance the consistent and complementary information of multiple views, CFMVC enhances the correlation among the same samples to maximize the consistent information while simultaneously reinforcing the independence among different samples to maximize the complementary information. Experimental results on several multi-view datasets demonstrate the effectiveness of CFMVC for the multi-view clustering task.
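The consistency/complementarity balance described above can be pictured with a cross-view similarity matrix: entries on the diagonal (the same sample seen from two views) should be large, off-diagonal entries (different samples) should be small. A toy loss in that spirit, assuming L2-normalised embeddings and cosine similarity (this is an illustrative stand-in, not the paper's exact objective):

```python
import numpy as np

def cfmvc_style_loss(z1, z2):
    """Toy consistency/complementarity balance for two views.

    z1, z2: (n, d) embeddings of the same n samples from two views.
    Rows are L2-normalised; diagonal cosine similarities (same sample
    across views) are pushed towards 1, off-diagonal ones (different
    samples) are pushed towards 0.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T                               # (n, n) cross-view similarities
    consistency = 1.0 - np.mean(np.diag(sim))     # maximise the diagonal
    off = sim - np.diag(np.diag(sim))
    complementarity = np.mean(off ** 2)           # suppress the off-diagonal
    return consistency + complementarity
```

When the two views agree perfectly on orthonormal embeddings the loss is zero; mismatched views raise it.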
Keyword :
Cross-view; deep neural network; graph convolutional network; multi-view clustering; multi-view fusion
Cite:
GB/T 7714: Huang, Zhijie, Huang, Binqiang, Zheng, Qinghai, et al. Cross-View Fusion for Multi-View Clustering[J]. IEEE SIGNAL PROCESSING LETTERS, 2025, 32: 621-625.
MLA: Huang, Zhijie, et al. "Cross-View Fusion for Multi-View Clustering." IEEE SIGNAL PROCESSING LETTERS 32 (2025): 621-625.
APA: Huang, Zhijie, Huang, Binqiang, Zheng, Qinghai, Yu, Yuanlong. Cross-View Fusion for Multi-View Clustering. IEEE SIGNAL PROCESSING LETTERS, 2025, 32, 621-625.
Abstract :
For objects with arbitrary angles in optical remote sensing (RS) images, the oriented bounding box regression task often faces the problem of ambiguous boundaries between positive and negative samples. A statistical analysis of existing label assignment strategies reveals that anchors with a low Intersection over Union (IoU) with the ground truth (GT) may still accurately surround the GT after decoding. Therefore, this article proposes an attention-based mean-max balance assignment (AMMBA) strategy, which consists of two parts: the mean-max balance assignment (MMBA) strategy and the balance feature pyramid with attention (BFPA). MMBA employs mean-max assignment (MMA) and balance assignment (BA) to dynamically calculate a positive threshold and adaptively match better positive samples to each GT for training. Meanwhile, to meet MMBA's need for more accurate feature maps, we construct a BFPA module that integrates spatial and scale attention mechanisms to promote global information propagation. Combined with S2ANet, our AMMBA method achieves state-of-the-art performance, with a precision of 80.91% on the DOTA dataset, in a simple plug-and-play fashion. Extensive experiments on three challenging optical RS image datasets (DOTA-v1.0, HRSC, and DIOR-R) further demonstrate a balance between precision and speed in single-stage object detectors. AMMBA has the potential to help existing RS models achieve better detection performance in a simple way. The code is available at https://github.com/promisekoloer/AMMBA.
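The key move in dynamic label assignment is replacing a fixed IoU cutoff (e.g. 0.5) with a per-GT threshold computed from the statistics of that GT's candidate anchors. The sketch below uses the well-known mean-plus-standard-deviation rule as a stand-in; the paper's MMBA balances mean- and max-based statistics in its own way, which the abstract does not fully specify:

```python
import statistics

def dynamic_positive_threshold(ious):
    """Per-GT adaptive threshold: mean + std of candidate IoUs.

    A stand-in for MMBA's dynamically calculated positive threshold:
    GTs whose candidates score uniformly low still get some positives,
    instead of being starved by a fixed global cutoff.
    """
    return statistics.fmean(ious) + statistics.pstdev(ious)

def select_positives(ious, thr):
    """Indices of anchors whose IoU clears the dynamic threshold."""
    return [i for i, iou in enumerate(ious) if iou >= thr]
```

For candidates [0.1, 0.2, 0.3, 0.8], the threshold is about 0.62, so only the last anchor is positive; a fixed 0.5 cutoff would pick the same anchor here, but the dynamic rule adapts when all candidates are weak.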
Keyword :
Accuracy; Attention feature fusion; Detectors; Feature extraction; label assignment; Location awareness; Object detection; optical remote sensing (RS) images; Optical scattering; oriented object detection; Remote sensing; Semantics; Shape; Training
Cite:
GB/T 7714: Lin, Qifeng, Chen, Nuo, Huang, Haibin, et al. Attention-Based Mean-Max Balance Assignment for Oriented Object Detection in Optical Remote Sensing Images[J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63.
MLA: Lin, Qifeng, et al. "Attention-Based Mean-Max Balance Assignment for Oriented Object Detection in Optical Remote Sensing Images." IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 63 (2025).
APA: Lin, Qifeng, Chen, Nuo, Huang, Haibin, Zhu, Daoye, Fu, Gang, Chen, Chuanxi, et al. Attention-Based Mean-Max Balance Assignment for Oriented Object Detection in Optical Remote Sensing Images. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2025, 63.
Abstract :
In real-world scenarios, missing views are common due to the complexity of data collection, so classifying incomplete multi-view data is unavoidable. Although substantial progress has been achieved, two challenging problems remain in incomplete multi-view classification: (1) simply ignoring missing views is often ineffective, especially under high missing rates, and can lead to incomplete analysis and unreliable results; (2) most existing multi-view classification models focus primarily on maximizing consistency between different views, yet neglecting specific-view information may degrade performance. To solve these problems, we propose a novel framework called Trusted Cross-View Completion (TCVC) for incomplete multi-view classification. Specifically, TCVC consists of three modules: a Cross-view Feature Learning Module (CVFL), an Imputation Module (IM), and a Trusted Fusion Module (TFM). First, CVFL mines specific-view information to obtain cross-view reconstruction features. Then, IM restores each missing view by fusing cross-view reconstruction features with weights guided by uncertainty-aware information, which is the quality assessment of the cross-view reconstruction features produced in TFM. Moreover, the recovered views are supervised in a cross-view neighborhood-aware manner. Finally, TFM effectively fuses the completed data to generate trusted classification predictions. Extensive experiments show that our method is effective and robust.
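The imputation step above fuses several candidate reconstructions of a missing view, trusting low-uncertainty candidates more. A minimal sketch with inverse-uncertainty weights, normalised to sum to one (an illustrative reading of "weights guided by uncertainty-aware information", not TCVC's exact formulation):

```python
import numpy as np

def impute_missing_view(reconstructions, uncertainties):
    """Fuse cross-view reconstructions of one missing view.

    reconstructions: (k, d) candidate reconstructions from k observed
    views; uncertainties: (k,) positive scores, lower = more trusted.
    Weights are inverse uncertainties normalised to sum to 1, so a
    very uncertain candidate contributes almost nothing.
    """
    w = 1.0 / np.asarray(uncertainties, dtype=float)
    w = w / w.sum()
    return w @ np.asarray(reconstructions, dtype=float)
```

With equal uncertainties this reduces to a plain average; as one candidate's uncertainty grows, the imputation converges to the trusted candidates alone.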
Keyword :
Cross-view feature learning; Incomplete multi-view classification; Uncertainty-aware
Cite:
GB/T 7714: Zhou, Liping, Chen, Shiyun, Song, Peihuan, et al. Trusted Cross-view Completion for incomplete multi-view classification[J]. NEUROCOMPUTING, 2025, 629.
MLA: Zhou, Liping, et al. "Trusted Cross-view Completion for incomplete multi-view classification." NEUROCOMPUTING 629 (2025).
APA: Zhou, Liping, Chen, Shiyun, Song, Peihuan, Zheng, Qinghai, Yu, Yuanlong. Trusted Cross-view Completion for incomplete multi-view classification. NEUROCOMPUTING, 2025, 629.
Abstract :
We tackle the problem of single-shape 3D generation, aiming to synthesize diverse and plausible shapes conditioned on a single input exemplar. This task is challenging due to the absence of dataset-level variation, requiring models to internalize structural patterns and generate novel shapes from limited local geometric cues. To address this, we propose a unified framework combining geometry-aware representation learning with a multiscale diffusion process. Our approach centers on a triplane autoencoder enhanced with a spatial pattern predictor and attention-based feature fusion, enabling fine-grained perception of local structures. To preserve structural coherence during generation, we introduce a soft feature distribution alignment loss that aligns features between input and generated shapes, balancing fidelity and diversity. Finally, we adopt a hierarchical diffusion strategy that progressively refines triplane features from coarse to fine, stabilizing training and improving quality. Extensive experiments demonstrate that our method produces high-fidelity, structurally consistent, and diverse shapes, establishing a strong baseline for single-shape generation.
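The soft feature distribution alignment idea above penalises mismatched feature statistics rather than element-wise differences, which is what lets the generator stay faithful to the exemplar's structure without reproducing it verbatim. A toy moment-matching version, assuming per-channel mean and standard deviation are the aligned statistics (the paper's actual loss may differ):

```python
import numpy as np

def soft_alignment_loss(feat_in, feat_gen):
    """Match first and second moments of exemplar vs generated features.

    feat_in, feat_gen: (n, d) feature sets. Penalising only the gap in
    per-channel means and standard deviations keeps the generated
    distribution close to the input's (fidelity) while leaving room
    for sample-level variation (diversity).
    """
    mu_gap = np.mean((feat_in.mean(axis=0) - feat_gen.mean(axis=0)) ** 2)
    sd_gap = np.mean((feat_in.std(axis=0) - feat_gen.std(axis=0)) ** 2)
    return mu_gap + sd_gap
```

Identical feature sets give zero loss, and so does any permutation of the generated samples, which is exactly the slack that element-wise losses would not allow.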
Keyword :
3D representation; Diffusion model; Shape generation
Cite:
GB/T 7714: Weng, Hongliang, Zheng, Qinghai, Yu, Yuanlong, et al. Geometry-aware triplane diffusion for single shape generation with feature alignment[J]. COMPUTERS & GRAPHICS-UK, 2025, 132.
MLA: Weng, Hongliang, et al. "Geometry-aware triplane diffusion for single shape generation with feature alignment." COMPUTERS & GRAPHICS-UK 132 (2025).
APA: Weng, Hongliang, Zheng, Qinghai, Yu, Yuanlong, Zhuang, Yixin. Geometry-aware triplane diffusion for single shape generation with feature alignment. COMPUTERS & GRAPHICS-UK, 2025, 132.
Abstract :
Facial beauty prediction (FBP) aims to develop a system to assess facial attractiveness automatically. Through prior research and our own observations, it has become evident that attribute information, such as gender and race, is a key factor behind the distribution discrepancy in FBP data. This distribution discrepancy hinders conventional FBP models from generalizing effectively to unseen attribute domain data, limiting further performance improvement. To address this problem, we exploit the attribute information to guide the training of convolutional neural networks (CNNs), with the goal of implicit feature alignment across data from various attribute domains. To this end, we introduce the attribute information into the convolution layer and the batch normalization (BN) layer, respectively, as they are the most crucial parts for representation learning in CNNs. Specifically, our method includes: 1) Attribute-guided convolution (AgConv), which dynamically updates convolutional filters based on attributes via parameter tuning or parameter rebirth; 2) Attribute-guided batch normalization (AgBN), which computes attribute-specific statistics through an attribute-guided batch sampling strategy; 3) an integrated framework combining AgConv and AgBN to achieve a more thorough feature alignment across different attribute domains. Extensive qualitative and quantitative experiments have been conducted on the SCUT-FBP, SCUT-FBP5500, and HotOrNot benchmark datasets. The results show that AgConv significantly improves the attribute-guided representation learning capacity and AgBN provides more stable optimization. Owing to the combination of AgConv and AgBN, the proposed framework (Ag-Net) achieves further performance improvement and is superior to other state-of-the-art approaches for FBP.
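The heart of attribute-specific BN statistics is easy to state: instead of normalising a batch with one shared mean and variance, each attribute group is normalised with its own. A minimal sketch of that idea, without the learnable scale/shift parameters or running statistics of a full BN layer:

```python
import numpy as np

def attribute_bn(x, attrs, eps=1e-5):
    """Normalise each sample with the statistics of its attribute group.

    x: (n, d) features; attrs: (n,) attribute group ids (e.g. gender
    or race). Each group is centred and scaled by its own mean and
    variance, so distribution gaps between attribute domains do not
    leak into one shared set of batch statistics.
    """
    out = np.empty_like(x, dtype=float)
    for a in np.unique(attrs):
        idx = attrs == a
        mu = x[idx].mean(axis=0)
        var = x[idx].var(axis=0)
        out[idx] = (x[idx] - mu) / np.sqrt(var + eps)
    return out
```

After this step every attribute group has (approximately) zero mean and unit variance, which is the implicit alignment the abstract describes.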
Keyword :
Batch normalization; Dynamic convolution; Facial attractiveness assessment; Facial beauty prediction
Cite:
GB/T 7714: Sun, Zhishu, Lin, Luojun, Yu, Yuanlong, et al. Learning feature alignment across attribute domains for improving facial beauty prediction[J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 249.
MLA: Sun, Zhishu, et al. "Learning feature alignment across attribute domains for improving facial beauty prediction." EXPERT SYSTEMS WITH APPLICATIONS 249 (2024).
APA: Sun, Zhishu, Lin, Luojun, Yu, Yuanlong, Jin, Lianwen. Learning feature alignment across attribute domains for improving facial beauty prediction. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 249.
Abstract :
Facial Beauty Prediction (FBP) is a significant pattern recognition task that aims to assess facial attractiveness consistently with human perception. Convolutional Neural Networks (CNNs) have become the mainstream method for FBP. However, most conventional CNNs learn static convolution kernels, which makes it difficult for the network to capture global attentive information, so key facial regions, e.g., the eyes and nose, are often ignored. To tackle this problem, we devise a new convolution manner, Dynamic Attentive Convolution (DyAttenConv), which integrates dynamic and attention mechanisms into convolution at the kernel level, with the aim of adapting the convolution kernels to each face dynamically. DyAttenConv is a plug-and-play module that can be flexibly combined with existing CNN architectures, enabling beauty-related features to be acquired more globally and attentively. Extensive ablation studies show that our method is superior to other fusion and attention mechanisms, and comparisons with state-of-the-art methods also demonstrate the effectiveness of DyAttenConv on the facial beauty prediction task.
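Kernel-level dynamic convolution generally keeps K candidate kernels and mixes them per input with attention weights, so the effective kernel changes face by face. A sketch of just that mixing step, with the attention logits passed in directly instead of being predicted by a small sub-network as a full implementation would (how DyAttenConv computes its attention is not spelled out in the abstract):

```python
import numpy as np

def dynamic_kernel(kernels, attention_logits):
    """Mix K candidate kernels into one input-adapted kernel.

    kernels: (K, kh, kw) candidate kernels; attention_logits: (K,)
    input-dependent scores. A softmax turns the scores into mixture
    weights, and the weighted sum is the kernel actually convolved
    with this input.
    """
    a = np.exp(attention_logits - np.max(attention_logits))  # stable softmax
    a = a / a.sum()
    return np.tensordot(a, kernels, axes=1)                  # (kh, kw)
```

The convolution itself is unchanged; only the kernel it uses is re-assembled per input, which is why the module is plug-and-play.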
Keyword :
dynamic convolution; facial beauty prediction; kernel attention
Cite:
GB/T 7714: Sun, Zhishu, Xiao, Zilong, Yu, Yuanlong, et al. Dynamic Attentive Convolution for Facial Beauty Prediction[J]. IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2024, E107(2): 239-243.
MLA: Sun, Zhishu, et al. "Dynamic Attentive Convolution for Facial Beauty Prediction." IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS E107.2 (2024): 239-243.
APA: Sun, Zhishu, Xiao, Zilong, Yu, Yuanlong, Lin, Luojun. Dynamic Attentive Convolution for Facial Beauty Prediction. IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2024, E107(2), 239-243.
Abstract :
Domain Generalization (DG) aims to generalize a model trained on multiple source domains to an unseen target domain. The source domains require precise annotations, which can be cumbersome or even infeasible to obtain in practice due to the vast amount of data involved. Web data, namely web-crawled images, offers access to large amounts of unlabeled images with rich style information, which can be leveraged to improve DG. From this perspective, we introduce a novel DG paradigm, termed Semi-Supervised Domain Generalization (SSDG), to explore how labeled and unlabeled source domains can interact, and establish two settings: close-set and open-set SSDG. Close-set SSDG is based on existing public DG datasets, while open-set SSDG, built on newly collected web-crawled datasets, presents a novel yet realistic challenge that pushes the limits of current technologies. A natural approach to SSDG is to transfer knowledge from labeled to unlabeled data via pseudo labeling, and to train the model on both labeled and pseudo-labeled data for generalization. Since domain-oriented pseudo labeling and out-of-domain generalization have conflicting goals, we develop a pseudo labeling phase and a generalization phase independently for SSDG. Unfortunately, due to the large domain gap, the pseudo labels produced in the pseudo labeling phase inevitably contain noise, which negatively affects the subsequent generalization phase. Therefore, to improve the quality of pseudo labels and further enhance generalizability, we propose a cyclic learning framework that encourages positive feedback between the two phases, utilizing an evolving intermediate domain that bridges the labeled and unlabeled domains in a curriculum learning manner. Extensive experiments validate the effectiveness of our method. It is worth highlighting that web-crawled images can promote domain generalization, as demonstrated by the experimental results.
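One round of the pseudo labeling phase can be reduced to a confidence filter: high-confidence predictions join the labeled pool, the rest wait for a later cycle once the generalization phase has improved the model. A minimal sketch of that loop (the threshold value and the predict interface are illustrative, not taken from the paper):

```python
def pseudo_label_round(predict, unlabeled, threshold=0.9):
    """One pseudo-labeling pass of a cyclic framework.

    predict(x) -> (label, confidence). Samples whose confidence clears
    the threshold are accepted as pseudo-labeled pairs; the remainder
    are kept back for a later round with a better model.
    """
    accepted, remaining = [], []
    for x in unlabeled:
        label, conf = predict(x)
        if conf >= threshold:
            accepted.append((x, label))
        else:
            remaining.append(x)
    return accepted, remaining
```

Iterating this against a model that is retrained on the growing labeled pool is the positive-feedback cycle the abstract describes; the evolving intermediate domain controls which samples become confidently labelable first.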
Keyword :
Domain generalization; Semi-supervised learning; Transfer learning; Unsupervised domain adaptation
Cite:
GB/T 7714: Lin, Luojun, Xie, Han, Sun, Zhishu, et al. Semi-supervised domain generalization with evolving intermediate domain[J]. PATTERN RECOGNITION, 2024, 149.
MLA: Lin, Luojun, et al. "Semi-supervised domain generalization with evolving intermediate domain." PATTERN RECOGNITION 149 (2024).
APA: Lin, Luojun, Xie, Han, Sun, Zhishu, Chen, Weijie, Liu, Wenxi, Yu, Yuanlong, et al. Semi-supervised domain generalization with evolving intermediate domain. PATTERN RECOGNITION, 2024, 149.
Abstract :
To support the analysis of members' working hours and work efficiency and the evaluation of task-allocation reasonableness in laboratory research management, this work studies MASRE, a multimodal work-efficiency analysis system based on camera video, attendance-machine records, and Web system records. By displaying and comparing, in real time, researchers' working hours, the invalid hours caused by phone-use behavior, and work efficiency, the system motivates laboratory members to devote more time to academic research. From the efficiency trends computed by the system, the laboratory director can assess whether research tasks are allocated reasonably, and researchers can analyze the factors affecting their own efficiency. MASRE consists of a Web module for working-hour and efficiency statistics and an AI module that automatically recognizes invalid working hours, implemented with PyTorch, Vue 3, and MySQL. An experimental analysis using the system's own development and the writing of its research report shows that MASRE can effectively identify invalid working hours and perform working-hour statistics and efficiency analysis. The system is open to laboratory research teams for free registration and use, at the address: .
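The bookkeeping behind the statistics module reduces to subtracting recognized invalid intervals from attendance hours. A minimal sketch, assuming phone-use detections arrive as (start, end) hour marks (the record layout is illustrative; MASRE's actual schema is not given):

```python
def effective_hours(total_hours, phone_use_intervals):
    """Working-hour and efficiency bookkeeping for one member-day.

    total_hours: attendance hours for the day; phone_use_intervals:
    non-overlapping (start, end) hour marks flagged as phone-use
    behavior by the recognition model. Effective hours = total minus
    invalid time; efficiency is their ratio.
    """
    invalid = sum(end - start for start, end in phone_use_intervals)
    valid = total_hours - invalid
    return valid, valid / total_hours
```

Aggregating these per-day values over weeks gives the efficiency trend curves the director uses to judge task allocation.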
Keyword :
task allocation; multimodal sampling; detection method; attention mechanism; phone-use behavior recognition; research team; research work-efficiency analysis
Cite:
GB/T 7714: 廖龙龙, 郑志伟, 张煜朋, et al. 基于多模态的实验室科研工效分析系统[J]. 计算机系统应用, 2024, 33(1): 68-75.
MLA: 廖龙龙, et al. "基于多模态的实验室科研工效分析系统." 计算机系统应用 33.1 (2024): 68-75.
APA: 廖龙龙, 郑志伟, 张煜朋, 方鑫, 郑育强, XIONG Ning, et al. 基于多模态的实验室科研工效分析系统. 计算机系统应用, 2024, 33(1), 68-75.