Clustered Federated Learning Based on Momentum Gradient Descent for Heterogeneous Data

Authors

Abstract

Data heterogeneity may significantly deteriorate the performance of federated learning, since clients' data distributions are divergent. To mitigate this issue, an effective method is to partition these clients into suitable clusters. However, existing clustered federated learning is based only on the gradient descent method, which leads to poor convergence performance. To accelerate the convergence rate, this paper proposes clustered federated learning based on momentum gradient descent (CFL-MGD) by integrating momentum and clustering techniques. In CFL-MGD, scattered clients are partitioned into the same cluster when they have the same learning tasks. Meanwhile, each client in a cluster utilizes its own private data to update the local model parameters through momentum gradient descent. Moreover, we present gradient averaging and model averaging for global aggregation, respectively. To understand the proposed algorithm, we also prove that CFL-MGD converges at an exponential rate for smooth and strongly convex loss functions. Finally, we validate the effectiveness of CFL-MGD on the CIFAR-10 and MNIST datasets.
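
As a concrete illustration of the procedure the abstract describes, the Python sketch below shows one communication round of momentum-based local training followed by cluster-wise model averaging. It is a minimal reading of the abstract, not the authors' reference implementation: the function names, the quadratic toy losses, and all hyperparameter values (learning rate, momentum coefficient, step counts) are assumptions for illustration only.

import numpy as np

def local_momentum_update(w, v, grad_fn, lr=0.1, beta=0.9, steps=5):
    # A few momentum (heavy-ball) gradient descent steps on one
    # client's private loss; all hyperparameter values are illustrative.
    for _ in range(steps):
        g = grad_fn(w)            # gradient of the local loss at w
        v = beta * v - lr * g     # momentum accumulation
        w = w + v                 # parameter update
    return w, v

def cluster_model_average(client_models):
    # Model averaging for global aggregation within a single cluster.
    return np.mean(np.stack(client_models), axis=0)

# Toy cluster: three clients with quadratic losses 0.5*||w - t||^2 that
# share one learning task but hold divergent private data (targets t).
targets = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.9, 2.1])]
models = [np.zeros(2) for _ in targets]
velocities = [np.zeros(2) for _ in targets]
for _ in range(20):                              # communication rounds
    for i, t in enumerate(targets):
        grad = lambda w, t=t: w - t              # gradient of the toy loss
        models[i], velocities[i] = local_momentum_update(
            models[i], velocities[i], grad)
    w_global = cluster_model_average(models)     # aggregate within the cluster
    models = [w_global.copy() for _ in models]   # broadcast back to clients
print(w_global)  # approaches the mean of the client targets

Gradient averaging, the alternative aggregation the abstract mentions, would instead average the clients' momentum-corrected gradients each round and apply a single update to the shared cluster model.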


Similar Articles

Using Gradient Descent Optimization for Acoustics Training from Heterogeneous Data

In this paper, we study the use of heterogeneous data for training acoustic models. In initial experiments, a significant drop in accuracy was observed on the in-domain test set when the data was added without any regularization. We propose a solution that gains control over the training data by optimizing the weights of the different data sets. The final models show good performance on al...


The Momentum Term in Gradient Descent

A momentum term is usually included in the simulations of connectionist learning algorithms. Although it is well known that such a term greatly improves the speed of learning, there have been few rigorous studies of its mechanisms. In this paper, I show that in the limit of continuous time, the momentum parameter is analogous to the mass of Newtonian particles that move through a viscous medium...
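
For reference, the momentum update analyzed in that continuous-time limit is the standard heavy-ball rule; the notation below is the conventional one, not necessarily this paper's own:

    v(t+1) = p·v(t) − e·∇E(w(t))
    w(t+1) = w(t) + v(t+1)

where e is the learning rate, E the error function, and p the momentum parameter. In the limit of continuous time, this difference scheme behaves like a Newtonian particle whose mass is set by p moving through a viscous medium, which is the analogy the abstract refers to.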


A New Rule-weight Learning Method based on Gradient Descent

In this paper, we propose a simple and efficient method to construct an accurate fuzzy classification system. In order to optimize the generalization accuracy, we use rule-weights as a simple mechanism to tune the classifier and propose a new learning method to iteratively adjust the weight of fuzzy rules. The rule-weights in the proposed method are derived by solving the minimization problem thr...


Learning to learn by gradient descent by gradient descent

The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorit...


Learning to Learn without Gradient Descent by Gradient Descent

We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-paramete...



Journal

Journal title: Electronics

Year: 2023

ISSN: 2079-9292

DOI: https://doi.org/10.3390/electronics12091972