
Author:

Wu, Mou [1] | Xiong, Naixue [2] | Vasilakos, Athanasios V. [3] | Leung, Victor C. M. [4] | Chen, C. L. Philip [5]

Indexed by:

EI

Abstract:

With the rise of the processing power of networked agents in the last decade, second-order methods for machine learning have received increasing attention. To solve the distributed optimization problems over multiagent systems, Newton's method has the benefits of fast convergence and high estimation accuracy. In this article, we propose a reinforced network Newton method with K-order control flexibility (RNN-K) in a distributed manner by integrating the consensus strategy and the latest knowledge across the network into local descent direction. The key component of our method is to make the best of intermediate results from the local neighborhood to learn global knowledge, not just for the consensus effect like most existing works, including the gradient descent and Newton methods as well as their refinements. Such a reinforcement enables revitalizing the traditional iterative consensus strategy to accelerate the descent of the Newton direction. The biggest difficulty to design the approximated Newton descent in distributed settings is addressed by using a special Taylor expansion that follows the matrix splitting technique. Based on the truncation on the Taylor series, our method also presents a tradeoff effect between estimation accuracy and computation/communication cost, which provides the control flexibility as a practical consideration. We derive theoretically the sufficient conditions for the convergence of the proposed RNN-K method of at least a linear rate. The simulation results illustrate the performance effectiveness by being applied to three types of distributed optimization problems that arise frequently in machine-learning scenarios. © 2013 IEEE.
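The abstract's central mechanism (splitting the Hessian and truncating its Taylor series at order K) can be illustrated with a small numerical sketch. The Python snippet below is not the authors' RNN-K implementation; it only shows, on a synthetic quadratic, how a K-order matrix-splitting recursion in the style of network Newton methods approximates the Newton step, with larger K (more computation/communication per step) yielding a more accurate descent direction. The function name and the synthetic Hessian are illustrative assumptions.

# Illustrative sketch only (assumed, not from the paper): approximate the
# Newton step d = -H^{-1} g using the splitting H = D - B and the first
# K + 1 terms of the corresponding Taylor/Neumann series. In the distributed
# setting described above, D is block diagonal (locally computable) and B
# carries the neighbor coupling, so each extra order costs one more exchange.
import numpy as np

def truncated_newton_direction(D, B, g, K):
    """K-order approximation via the recursion d_{k+1} = D^{-1} (B d_k - g)."""
    D_inv = np.linalg.inv(D)
    d = -D_inv @ g                 # order-0 term of the series
    for _ in range(K):             # each loop adds one more order
        d = D_inv @ (B @ d - g)
    return d

# Synthetic, diagonally dominant quadratic so the truncated series converges
rng = np.random.default_rng(0)
n = 6
M = 0.3 * rng.standard_normal((n, n))
H = 0.5 * (M + M.T) + n * np.eye(n)    # SPD Hessian
D = np.diag(np.diag(H))                # splitting: diagonal part ...
B = D - H                              # ... and off-diagonal remainder
g = rng.standard_normal(n)

exact = -np.linalg.solve(H, g)         # exact Newton step for comparison
for K in (0, 1, 3, 6):
    approx = truncated_newton_direction(D, B, g, K)
    print(f"K={K}: ||d_K - d_exact|| = {np.linalg.norm(approx - exact):.2e}")

Running the loop shows the approximation error shrinking as K grows, which is the accuracy versus cost tradeoff the abstract refers to.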

Keyword:

Gradient methods; Machine learning; Multi-agent systems; Newton-Raphson method; Optimization; Reinforcement; Taylor series

Community:

  • [ 1 ] [Wu, Mou]Tianjin University, College of Intelligence and Computing, Tianjin; 300350, China
  • [ 2 ] [Wu, Mou]Hubei University of Science and Technology, School of Computer Science and Technology, Xianning; 437100, China
  • [ 3 ] [Xiong, Naixue]Tianjin University, College of Intelligence and Computing, Tianjin; 300350, China
  • [ 4 ] [Vasilakos, Athanasios V.]Fuzhou University, Department of Computer Science and Technology, Fuzhou; 350116, China
  • [ 5 ] [Leung, Victor C. M.]Shenzhen University, College of Computer Science and Software Engineering, Shenzhen; 518060, China
  • [ 6 ] [Leung, Victor C. M.]University of British Columbia, Department of Electrical and Computer Engineering, Vancouver; V6T 1Z4, Canada
  • [ 7 ] [Chen, C. L. Philip]South China University of Technology, School of Computer Science and Engineering, Guangzhou; 510006, China
  • [ 8 ] [Chen, C. L. Philip]Dalian Maritime University, Navigation College, Dalian; 116026, China

Reprint Address:

Email:


Related Keywords:

Related Article:

Source:

IEEE Transactions on Cybernetics

ISSN: 2168-2267

Year: 2022

Issue: 5

Volume: 52

Page: 4012-4026

11.8 (JCR@2022)

9.400 (JCR@2023)

ESI HC Threshold: 61

JCR Journal Grade: 1

CAS Journal Grade: 1

Cited Count:

WoS CC Cited Count:

SCOPUS Cited Count: 18

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:

30 Days PV: 0

Affiliated Colleges:
