Abstract:
Enhancing the performance of machine learning algorithms, overcoming the curse of dimensionality, and maintaining model interpretability remain significant challenges for fuzzy systems (FS). Mini-batch gradient descent (MBGD) offers fast convergence and strong generalization, but its applications have generally been restricted to low-dimensional problems with small datasets. In this paper, we propose a novel deep-learning-based prediction method that optimizes deep neural-fuzzy systems (ODNFS) by considering the essential correlations of external and internal factors. Specifically, the maximal information coefficient (MIC) is used to rank features by significance and eliminate the least relevant ones; a uniform regularization is then introduced to enforce consistency in the average normalized activation levels across rules. An improved MBGD technique with DropRule and AdaBound (MBGD-RDA) is put forward to train each sub-FS of the deep fuzzy system layer by layer. Experiments on several datasets show that ODNFS effectively balances efficiency, accuracy, and stability, and can be used for training datasets of any size. The proposed ODNFS outperforms MBGD-RDA and state-of-the-art methods in accuracy and generalization, with fewer parameters and rules.
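The uniform regularization described above can be sketched as follows. This is an illustrative reading, not the paper's exact formulation: it penalizes the squared deviation of each rule's average normalized firing level over a mini-batch from the uniform level 1/R, where R is the number of rules. All function names are hypothetical.

```python
def normalized_firing_levels(raw_levels):
    """Normalize one sample's rule firing levels so they sum to 1."""
    total = sum(raw_levels)
    return [f / total for f in raw_levels]

def uniform_regularization(batch_raw_levels):
    """Squared deviation of each rule's batch-average normalized
    activation from the uniform target 1/R, summed over rules."""
    n_rules = len(batch_raw_levels[0])
    n_samples = len(batch_raw_levels)
    # Average normalized activation of each rule over the mini-batch.
    avg = [0.0] * n_rules
    for raw in batch_raw_levels:
        for r, f in enumerate(normalized_firing_levels(raw)):
            avg[r] += f / n_samples
    target = 1.0 / n_rules
    return sum((a - target) ** 2 for a in avg)

# Example: a mini-batch of two samples, each with three rule firing levels.
batch = [[0.2, 0.5, 0.3], [0.1, 0.1, 0.8]]
penalty = uniform_regularization(batch)
```

Added to the training loss, such a term discourages a few rules from dominating all samples, which is one way to read the abstract's goal of consistent average activation levels across rules.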
Source: INTERNATIONAL JOURNAL OF FUZZY SYSTEMS
ISSN: 1562-2479
Year: 2025
Impact Factor: 3.600 (JCR@2023)