Query:
Scholar name: 陈晓云 (Chen Xiaoyun)
Abstract :
Community detection is an important tool for analyzing and understanding large-scale complex networks. It divides network nodes into multiple communities with dense intra-community connections and sparse inter-community connections. Traditional community detection algorithms focus on non-attributed networks that contain only topological structure and ignore the attribute information on the nodes. Dual-channel attributed network community detection models optimize topology and attribute information as two channels, which makes full use of both types of information and improves clustering accuracy. As a classical mathematical method for community detection, non-negative matrix factorization is only suitable for linear data and cannot mine nonlinear latent structural features. To address these limitations, this paper proposes a dual-channel attributed graph community detection algorithm based on kernel matrix factorization (KDACD). The nonlinear relations between nodes are learned using the kernel trick, which projects node attribute features into a high-dimensional Hilbert space, and the robustness of the model is improved by sparse constraints and manifold regularization terms. Extensive experiments on six real-world datasets verify the effectiveness of the algorithm.
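The core idea above, an RBF kernel over node attributes followed by a symmetric non-negative factorization of the kernel matrix, can be sketched on toy data as follows. This is a minimal illustration, not the paper's KDACD model: the kernel bandwidth, the damped multiplicative update, and all sizes are assumptions, and the sparse and manifold terms are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((30, 8))          # 30 nodes, 8 attributes (toy data)
k = 3                            # assumed number of communities

# Kernel trick: the RBF kernel implicitly maps attributes into a Hilbert space.
sq = np.sum(X**2, axis=1)
D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T    # pairwise squared distances
K = np.exp(-D2 / (2 * np.median(D2)))           # n x n kernel matrix

# Symmetric NMF on K: K ~ H H^T; rows of H act as soft community memberships.
H = rng.random((30, k)) + 0.1
for _ in range(200):
    # damped multiplicative update (Ding et al. style) keeps H non-negative
    H *= 0.5 + 0.5 * (K @ H) / (H @ (H.T @ H) + 1e-9)

communities = H.argmax(axis=1)   # hard assignment per node
```

The argmax over rows of `H` is one common way to read off communities; KDACD's actual assignment rule may differ.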
Keyword :
Attributed graph community detection; kernel trick; manifold regularization; non-negative matrix factorization; unsupervised learning
Cite:
GB/T 7714 | Zheng, Zhiwen , Chen, Xiaoyun , Lin, Xinyi . Kernel Based Dual-Channel Attributed Graph Community Detection [J]. | IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING , 2024 , 11 (1) : 592-603 . |
MLA | Zheng, Zhiwen et al. "Kernel Based Dual-Channel Attributed Graph Community Detection" . | IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING 11 . 1 (2024) : 592-603 . |
APA | Zheng, Zhiwen , Chen, Xiaoyun , Lin, Xinyi . Kernel Based Dual-Channel Attributed Graph Community Detection . | IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING , 2024 , 11 (1) , 592-603 . |
Abstract :
As an important technology for recommendation systems, attributed graph clustering has received extensive attention recently. Attributed graph clustering methods based on graph neural networks are the mainstream, but their assumption that the attributes of adjacent nodes are similar is often not satisfied in the real world, which degrades clustering performance. To this end, this paper proposes an attributed graph subspace clustering algorithm with residual compensation guided by adaptive dual manifold regularization (ADMRGC). On the basis of the low-rank representation (LRR) subspace clustering model, ADMRGC introduces attribute manifold regularization, topological manifold regularization, and residual compensation, which allow it to exploit attribute similarity and topological similarity at the same time, solving the problem that traditional subspace clustering considers only attribute information. In addition, ADMRGC balances the contributions of node attributes and topology by adaptively weighting the dual manifold regularization, and uses a residual representation matrix to describe the difference between node attribute similarity and topological neighbor relationships. ADMRGC can therefore avoid the limitation of graph neural network models that assume adjacent nodes have similar attributes. Experimental results on seven public graph datasets show that ADMRGC achieves the best clustering performance on both high-homophily and low-homophily datasets. In particular, on datasets with low homophily, ADMRGC, though a shallow model, outperforms state-of-the-art deep neural network models.
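The dual manifold regularization described above, one graph Laplacian per channel with weights balancing the two, can be sketched like this. The cosine-similarity attribute graph and the fixed weights `a`, `b` are illustrative assumptions; ADMRGC learns the weights adaptively and adds the residual compensation term, both omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 5
A = (rng.random((n, n)) < 0.2).astype(float)    # toy adjacency (topology channel)
A = np.triu(A, 1); A = A + A.T                  # symmetric, no self-loops
X = rng.random((n, d))                          # node attributes

# Attribute-similarity graph; cosine similarity is an assumed choice.
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
W_attr = Xn @ Xn.T

def laplacian(W):
    """Unnormalized graph Laplacian L = D - W."""
    return np.diag(W.sum(axis=1)) - W

L_topo, L_attr = laplacian(A), laplacian(W_attr)

# Manifold regularization tr(Z^T L Z) penalizes representations that differ
# across edges; a and b weight the topology and attribute channels.
Z = rng.random((n, 4))                          # some representation matrix
a, b = 0.5, 0.5
reg = a * np.trace(Z.T @ L_topo @ Z) + b * np.trace(Z.T @ L_attr @ Z)
```

Because both Laplacians are positive semi-definite, the regularizer is always non-negative and is minimized when connected or similar nodes share representations.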
Keyword :
Clustering algorithms; Deep neural networks; Graph neural networks; Neural network models
Cite:
GB/T 7714 | Li, Yan , Chen, Xiaoyun . Attributed graph subspace clustering with residual compensation guided by adaptive dual manifold regularization [J]. | Expert Systems with Applications , 2024 , 255 . |
MLA | Li, Yan et al. "Attributed graph subspace clustering with residual compensation guided by adaptive dual manifold regularization" . | Expert Systems with Applications 255 (2024) . |
APA | Li, Yan , Chen, Xiaoyun . Attributed graph subspace clustering with residual compensation guided by adaptive dual manifold regularization . | Expert Systems with Applications , 2024 , 255 . |
Abstract :
Subspace clustering models based on self-representation learning often use the ℓ1, ℓ2, or nuclear norm to constrain the self-representation matrix of the dataset. In theory, the ℓ1 norm can enforce the independence of subspaces, but it may lead to under-connection because of the sparsity of the self-representation matrix. The ℓ2 and nuclear norm regularizations can improve the connectivity between clusters, but may lead to over-connection of the self-representation matrix.
Because a single regularization term may cause subspaces to be over- or under-divided, this paper proposes an elastic deep sparse self-representation subspace clustering network (EDS-SC), which imposes sparse constraints on deep features and introduces elastic net regularization, mixing the ℓ1 and ℓ2 norms, to constrain the self-representation matrix. The network can extract deep sparse features and balances subspace independence against connectivity. Experiments on human face, object, and medical imaging datasets prove the effectiveness of the EDS-SC network.
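The elastic net penalty that mixes the ℓ1 and ℓ2 norms has a simple proximal operator: soft-thresholding (the ℓ1 part) followed by shrinkage (the ℓ2 part). A small sketch of that operator, not the EDS-SC network itself; the penalty weights are arbitrary:

```python
import numpy as np

def elastic_net_prox(M, lam1, lam2):
    """argmin_C 0.5*||C - M||_F^2 + lam1*||C||_1 + lam2*||C||_F^2."""
    soft = np.sign(M) * np.maximum(np.abs(M) - lam1, 0.0)  # l1 soft-threshold
    return soft / (1.0 + 2.0 * lam2)                        # l2 shrinkage

M = np.array([[0.05, -0.8],
              [1.20,  0.0]])
C = elastic_net_prox(M, lam1=0.1, lam2=0.5)
# small entries are zeroed out (sparsity), surviving entries are shrunk
```

The ℓ1 term zeroes weak coefficients (driving subspace independence), while the ℓ2 term shrinks but never zeroes, which is what preserves connectivity.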
Keyword :
Deep auto-encoder; Deep sparse feature; Elastic net regularization; Representation learning; Subspace clustering
Cite:
GB/T 7714 | Wang, Qiaoping , Chen, Xiaoyun , Li, Yan et al. Elastic Deep Sparse Self-Representation Subspace Clustering Network [J]. | NEURAL PROCESSING LETTERS , 2024 , 56 (2) . |
MLA | Wang, Qiaoping et al. "Elastic Deep Sparse Self-Representation Subspace Clustering Network" . | NEURAL PROCESSING LETTERS 56 . 2 (2024) . |
APA | Wang, Qiaoping , Chen, Xiaoyun , Li, Yan , Lin, Yanming . Elastic Deep Sparse Self-Representation Subspace Clustering Network . | NEURAL PROCESSING LETTERS , 2024 , 56 (2) . |
Abstract :
The unstructured nature and uneven distribution of point cloud data pose great challenges to object feature representation and classification. To extract the 3D structural features of point cloud objects, existing methods mostly assemble hierarchical networks from complex local feature extraction modules, which makes the feature extraction network complicated and focused mainly on local structure. To better extract features of unevenly distributed point cloud objects, we propose a node structure network (NsNet) in which sampled convolution points are adaptively weighted by density. The network adaptively weights sampled points by Gaussian density to distinguish their density differences and thus better characterize the global structure of an object; in addition, spherical coordinates are introduced to simplify the network structure and reduce model complexity. Compared with PointNet++, PointMLP, and other methods on three public datasets, NsNet with adaptive density weighting improves overall accuracy (OA) over PointNet++ and PointMLP by 9.1 and 1.3 percentage points respectively, and uses 4.6×10^6 fewer parameters than PointMLP. NsNet effectively mitigates the loss of edge-point information caused by uneven point cloud distribution, improving classification accuracy while reducing model complexity.
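The Gaussian density weighting of sampled points can be sketched as follows. This is only an illustration of the weighting idea: the bandwidth `sigma` and the inverse-density weight (so that sparse edge points count more) are assumptions of this sketch, not NsNet's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.random((100, 3))          # toy point cloud with uneven density

# Gaussian kernel density estimate at each sampled point.
sigma = 0.2                         # assumed bandwidth
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
density = np.exp(-d2 / (2 * sigma**2)).sum(axis=1)

# Down-weight points in dense regions so sparse edge points are not
# drowned out; normalize so the weights form a distribution.
w = 1.0 / density
w /= w.sum()
```

Points in crowded regions receive small weights and isolated edge points large ones, which is the mechanism the abstract credits for reducing edge-point information loss.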
Keyword :
3D point cloud; convolutional neural network; density weighting; local structure; global structure; point cloud classification
Cite:
GB/T 7714 | 高文烁 , 陈晓云 . 基于节点结构的点云分类网络 [J]. | 计算机应用 , 2024 , 44 (05) : 1471-1478 . |
MLA | 高文烁 et al. "基于节点结构的点云分类网络" . | 计算机应用 44 . 05 (2024) : 1471-1478 . |
APA | 高文烁 , 陈晓云 . 基于节点结构的点云分类网络 . | 计算机应用 , 2024 , 44 (05) , 1471-1478 . |
Abstract :
Link prediction aims to predict missing links or eliminate spurious links by employing known complex network information. As an unsupervised linear feature representation method, the matrix factorization (MF)-based autoencoder (AE) can project a high-dimensional data matrix into a low-dimensional latent space. However, most traditional link prediction methods based on MF or AE adopt shallow models and a single adjacency matrix, which cannot adequately learn and represent network features and are susceptible to noise. In addition, because some methods require a symmetric input matrix, they can only be used on undirected networks. Therefore, we propose a deep manifold matrix factorization autoencoder model using a global connectivity matrix, called DM-MFAE-G. The model uses the PageRank algorithm to obtain the global connectivity matrix between nodes of the complex network. DM-MFAE-G performs deep matrix factorization on the local adjacency matrix and the global connectivity matrix, respectively, to obtain global and local multi-layer feature representations, which contain rich structural information. The model is solved by an alternating iterative optimization method, and the convergence of the algorithm is proved. Comprehensive experiments on different real networks demonstrate that the global connectivity matrix and manifold constraints introduced by DM-MFAE-G significantly improve link prediction performance on directed and undirected networks.
Keyword :
Autoencoder; Deep matrix factorization; Graph representation learning; Link prediction; Manifold regularization
Cite:
GB/T 7714 | Lin, Xinyi , Chen, Xiaoyun , Zheng, Zhiwen . Deep manifold matrix factorization autoencoder using global connectivity for link prediction [J]. | APPLIED INTELLIGENCE , 2023 , 53 (21) : 25816-25835 . |
MLA | Lin, Xinyi et al. "Deep manifold matrix factorization autoencoder using global connectivity for link prediction" . | APPLIED INTELLIGENCE 53 . 21 (2023) : 25816-25835 . |
APA | Lin, Xinyi , Chen, Xiaoyun , Zheng, Zhiwen . Deep manifold matrix factorization autoencoder using global connectivity for link prediction . | APPLIED INTELLIGENCE , 2023 , 53 (21) , 25816-25835 . |
Abstract :
The purpose of graph embedding is to encode the known node features and topological information of a graph into low-dimensional embeddings for downstream learning tasks. Graph autoencoders can aggregate graph topology and node features, but they depend heavily on gradient-descent optimizers, incur long iterative training times, and are susceptible to local optima. Thus, we propose the Graph Convolutional Extreme Learning Machine Autoencoder. To address the limitation that the extreme learning machine autoencoder cannot use topological information, a graph convolution operation is introduced between the input layer and the hidden layer to improve the representation ability of the resulting graph embedding. Experiments on link prediction and node classification on five real datasets show that our method is effective.
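The combination described, a graph convolution feeding a random ELM hidden layer whose output weights are solved in closed form, can be sketched as below. All sizes, the tanh activation, and the ridge penalty are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, h = 40, 10, 6
X = rng.random((n, d))                          # node features
A = (rng.random((n, n)) < 0.15).astype(float)
A = np.maximum(A, A.T)                          # toy undirected adjacency

# GCN-style renormalized adjacency: A_hat = D^{-1/2} (A + I) D^{-1/2}.
A_I = A + np.eye(n)
d_is = 1.0 / np.sqrt(A_I.sum(axis=1))
A_norm = d_is[:, None] * A_I * d_is[None, :]

# ELM-AE: the hidden weights are random and never trained.
W = rng.standard_normal((d, h))
b = rng.standard_normal(h)
H = np.tanh(A_norm @ X @ W + b)                 # graph conv before hidden layer

# Only the output weights are learned, in closed form (ridge regression);
# no gradient descent, so training is a single linear solve.
C = 1.0                                          # assumed regularization strength
beta = np.linalg.solve(H.T @ H + np.eye(h) / C, H.T @ X)
Z = H @ beta                                     # reconstruction of X; H is the embedding
```

The single linear solve is what sidesteps the iterative-training and local-optima issues the abstract attributes to gradient-trained graph autoencoders.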
Keyword :
Extreme learning machine autoencoder; Graph autoencoder; Graph embedding; Link prediction; Node classification
Cite:
GB/T 7714 | Lin, X. , Chen, X. , Lin, Y. . Graph Convolutional Extreme Learning Machine Autoencoder for Graph Embedding [unknown]. |
MLA | Lin, X. et al. "Graph Convolutional Extreme Learning Machine Autoencoder for Graph Embedding" [unknown]. |
APA | Lin, X. , Chen, X. , Lin, Y. . Graph Convolutional Extreme Learning Machine Autoencoder for Graph Embedding [unknown]. |
Abstract :
Artistic style transfer aims to transfer artistic styles to content images. Although it has received great attention in recent years, most existing style transfer methods remain limited in practical applications due to slow transfer speed, such as difficulty in efficiently processing large images (e.g., 1024×1024 pixels). To address this issue, we propose a fast artistic style transfer method based on wavelet transforms. Specifically, we use the wavelet transform to compress the content image and extract its low-frequency component, which is an approximation of the content image at a quarter of its size. We stylize only this low-frequency component, which effectively accelerates the style transfer. Experimental results demonstrate that our method generates high-quality stylized images with high efficiency.
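Extracting the quarter-size low-frequency component can be sketched with a one-level Haar approximation, which (up to a normalization constant) is just the average of each 2×2 block. This is a sketch of the wavelet compression step only; the stylization network itself is out of scope.

```python
import numpy as np

def haar_lowfreq(img):
    """One-level Haar approximation: mean of each 2x2 block.

    Returns an image with half the height and half the width, i.e. a
    quarter of the pixels, matching the abstract's low-frequency component
    up to the Haar normalization constant.
    """
    h, w = img.shape
    img = img[: h - h % 2, : w - w % 2]          # crop to even dimensions
    return (img[0::2, 0::2] + img[0::2, 1::2]
          + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

img = np.arange(16, dtype=float).reshape(4, 4)   # toy grayscale "image"
low = haar_lowfreq(img)                          # 2x2 approximation
```

After stylizing `low`, an inverse wavelet transform with the untouched high-frequency bands would reconstruct a full-resolution result; only a quarter of the pixels pass through the expensive stylization.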
Keyword :
artistic style transfer; model-optimization methods; wavelet transforms
Cite:
GB/T 7714 | Zhang, X. , Chen, X. . Fast Artistic Style Transfer via Wavelet Transforms [unknown]. |
MLA | Zhang, X. et al. "Fast Artistic Style Transfer via Wavelet Transforms" [unknown]. |
APA | Zhang, X. , Chen, X. . Fast Artistic Style Transfer via Wavelet Transforms [unknown]. |
Abstract :
Existing methods based on graph convolutional neural networks have made progress in graph clustering, but most do not consider how to obtain the embedding representation most beneficial for clustering when the clustering structure cannot be predicted, and cannot fully exploit graph structure, attribute information, and the relationship between them. In this paper, we propose a marginalized graph autoencoder with subspace structure preserving, which adds a self-expressive layer on top of the marginalized graph autoencoder to reveal the clustering structure of node attributes, so that the output of the autoencoder maintains the multi-subspace structure of the input feature matrix and matches the clustering target, yielding a cluster-oriented graph representation that improves clustering performance. Experiments on four public datasets show that the algorithm effectively improves graph clustering performance.
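The self-expressive layer mentioned above reconstructs each latent sample as a combination of the others, Z ≈ CZ. With a ridge penalty this coefficient matrix has a closed form, sketched below; the paper's layer is trained inside the network, so this closed-form version is only an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 25, 8
Z = rng.random((n, d))            # stand-in for encoder latent features

# Self-expression with ridge penalty:
#   min_C ||Z - C Z||_F^2 + lam * ||C||_F^2
# has closed form C = Z Z^T (Z Z^T + lam I)^{-1}.
lam = 0.1
G = Z @ Z.T
C = G @ np.linalg.inv(G + lam * np.eye(n))
np.fill_diagonal(C, 0)            # forbid trivial self-reconstruction

# Symmetric non-negative affinity for a spectral clustering step.
W = 0.5 * (np.abs(C) + np.abs(C).T)
```

Samples lying in the same subspace tend to get large mutual coefficients, so `W` exposes the multi-subspace structure that the autoencoder output is constrained to preserve.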
Keyword :
Convolutional neural networks; Graph algorithms; Graph neural networks; Graph structures; Learning systems
Cite:
GB/T 7714 | Li, Yan , Chen, Xiaoyun . Marginalized Graph Autoencoder with Subspace Structure Preserving [C] . 2023 : 70-76 . |
MLA | Li, Yan et al. "Marginalized Graph Autoencoder with Subspace Structure Preserving" . (2023) : 70-76 . |
APA | Li, Yan , Chen, Xiaoyun . Marginalized Graph Autoencoder with Subspace Structure Preserving . (2023) : 70-76 . |
Abstract :
Community detection is an important tool for analyzing and understanding large-scale graph networks. Traditional non-negative matrix factorization methods use the complete adjacency matrix, whose redundant information may interfere with learning node features and brings high computational complexity. Thus, we propose the Node2vec-enhanced attributed graph matrix factorization algorithm (N2V-AGMF), which combines graph embedding and non-negative matrix factorization. The algorithm uses Node2vec to extract low-dimensional node features from topological information, then combines them with the attribute matrix for joint matrix factorization. The low-dimensional embedding of the adjacency matrix enriches the representation of node features and effectively reduces the high computational complexity caused by factorizing the high-dimensional matrix. Experiments on five real-world datasets verify the effectiveness of the algorithm.
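The joint factorization step can be sketched by concatenating the topology-derived embedding with the attribute matrix and running standard NMF on the result, so one membership matrix must explain both. The random matrix `E` here merely stands in for real Node2vec output, and the plain multiplicative updates are an assumed solver, not necessarily N2V-AGMF's.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d_e, d_a, k = 30, 16, 8, 3
E = rng.random((n, d_e))            # stand-in for Node2vec embeddings
X = rng.random((n, d_a))            # node attribute matrix

# Joint NMF on the concatenated features: M ~ U V, with U the shared
# community membership matrix across both channels.
M = np.hstack([E, X])
U = rng.random((n, k)) + 0.1
V = rng.random((k, M.shape[1])) + 0.1
for _ in range(200):                # standard multiplicative updates
    U *= (M @ V.T) / (U @ V @ V.T + 1e-9)
    V *= (U.T @ M) / (U.T @ U @ V + 1e-9)

labels = U.argmax(axis=1)           # community assignment per node
```

Factorizing the n×(d_e+d_a) matrix instead of the n×n adjacency is where the claimed complexity reduction comes from when d_e+d_a ≪ n.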
Cite:
GB/T 7714 | Zheng, Z. , Chen, X. . Node2vec Enhanced Attributed Graph Matrix Factorization Algorithm [unknown]. |
MLA | Zheng, Z. et al. "Node2vec Enhanced Attributed Graph Matrix Factorization Algorithm" [unknown]. |
APA | Zheng, Z. , Chen, X. . Node2vec Enhanced Attributed Graph Matrix Factorization Algorithm [unknown]. |
Abstract :
Clustering high-dimensional complex data usually requires dimensionality reduction before clustering, but common reduction methods ignore the within-class aggregation of the data and the correlations among samples, so it is hard to guarantee that the reduction method matches the clustering algorithm, which leads to loss of clustering information. The nonlinear unsupervised dimensionality reduction method, the extreme learning machine autoencoder (ELM-AE), has been widely applied to dimensionality reduction and denoising in recent years owing to its fast learning speed and good generalization. So that high-dimensional data still preserves its original subspace structure after projection into a low-dimensional space, we propose a multilayer extreme learning machine autoencoder dimensionality reduction method based on subspace structure preservation (Multilayer extreme learning machine autoencoder based on subspace structure preserv...
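A plain multilayer ELM-AE reduction, without the paper's subspace-structure-preserving term, can be sketched by stacking single ELM-AE layers, each of which learns only its closed-form output weights and uses their transpose to project the data down. Layer sizes and the tanh activation are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(7)

def elm_ae_reduce(X, h, C=1.0, rng=rng):
    """One ELM-AE layer: random hidden map, ridge-solved output weights beta,
    then project X down via beta^T (the usual ELM-AE dimensionality trick)."""
    n, d = X.shape
    W = rng.standard_normal((d, h))              # random, never trained
    b = rng.standard_normal(h)
    H = np.tanh(X @ W + b)                       # hidden activations (n, h)
    beta = np.linalg.solve(H.T @ H + np.eye(h) / C, H.T @ X)   # (h, d)
    return X @ beta.T                            # reduced features (n, h)

X = rng.random((50, 20))
Z1 = elm_ae_reduce(X, 12)    # stack layers: 20 -> 12 -> 5 dimensions
Z2 = elm_ae_reduce(Z1, 5)
```

Each layer costs one linear solve, which is why multilayer ELM-AEs keep the fast training the abstract highlights; the proposed method additionally constrains each projection to preserve the subspace structure, which this sketch omits.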
Keyword :
multilayer extreme learning machine; subspace learning; autoencoder; dimensionality reduction
Cite:
GB/T 7714 | 陈晓云 , 陈媛 . 子空间结构保持的多层极限学习机自编码器 [J]. | 自动化学报 , 2022 , 48 (04) : 1091-1104 . |
MLA | 陈晓云 et al. "子空间结构保持的多层极限学习机自编码器" . | 自动化学报 48 . 04 (2022) : 1091-1104 . |
APA | 陈晓云 , 陈媛 . 子空间结构保持的多层极限学习机自编码器 . | 自动化学报 , 2022 , 48 (04) , 1091-1104 . |