Greedy training algorithms for neural networks and applications to PDEs

Authors

Abstract

Recently, neural networks have been widely applied for solving partial differential equations (PDEs). Although such methods have proven remarkably successful on practical engineering problems, they have not been shown, theoretically or empirically, to converge to the underlying PDE solution with arbitrarily high accuracy. The primary difficulty lies in solving the highly non-convex optimization problems resulting from the neural network discretization, which are difficult to treat both theoretically and practically. It is our goal in this work to take a step toward remedying this. For this purpose, we develop a novel greedy training algorithm for shallow neural networks. Our method is applicable to both the variational formulation of the PDE and also to the residual minimization formulation pioneered by physics informed neural networks (PINNs). We analyze the method and obtain a priori error bounds when solving PDEs from the function class defined by shallow networks, which rigorously establishes the convergence of the method as the network size increases. Finally, we test the algorithm on several benchmark examples, including high dimensional PDEs, to confirm the theoretical convergence rate. Although the method is expensive relative to traditional approaches such as finite element methods, we view this work as a proof of concept for network-based methods, showing that numerical methods based upon neural networks can be shown to converge.
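As a rough illustration of the kind of method the abstract describes, the following is a minimal sketch (assumed details, not the authors' implementation) of an orthogonal greedy algorithm for residual minimization of a 1D Poisson problem -u'' = f with zero boundary values. The ReLU² dictionary, the random candidate pool used for the inner maximization, and the boundary penalty weight are all illustrative choices. Each iteration adds the dictionary neuron most correlated with the current residual and then refits all outer coefficients by least squares; that orthogonal-projection step is what distinguishes an orthogonal greedy method from a purely relaxed one.

```python
# Minimal sketch of an orthogonal greedy algorithm for residual minimization
# of -u'' = f on [0, 1] with u(0) = u(1) = 0.  All parameter choices here
# (dictionary, candidate pool, penalty weight) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Collocation points and manufactured data: exact solution u(x) = sin(pi x).
x = np.linspace(0.0, 1.0, 201)
f = (np.pi ** 2) * np.sin(np.pi * x)          # right-hand side -u''
u_exact = np.sin(np.pi * x)

def relu2(t):
    return np.maximum(t, 0.0) ** 2

def relu2_dd(t):
    # second derivative of ReLU^2 with respect to its argument (piecewise 2)
    return 2.0 * (t > 0.0)

def pde_column(w, b):
    # PDE residual contribution of one neuron: -d^2/dx^2 sigma(w x + b)
    return -(w ** 2) * relu2_dd(w * x + b)

def bdy_column(w, b):
    # boundary values at x = 0 and x = 1, enforced by a penalty term
    return relu2(w * np.array([0.0, 1.0]) + b)

# Candidate dictionary of (w, b) pairs; in practice the inner maximization
# would be solved more carefully than by random sampling.
candidates = [(w, b) for w in rng.uniform(-6, 6, 200)
                     for b in rng.uniform(-6, 6, 5)]

beta = 100.0                                   # boundary penalty weight
chosen, A_pde, A_bdy = [], [], []
residual = -f.copy()                           # residual of the zero initial guess
bdy_residual = np.zeros(2)

for k in range(30):
    # 1) greedy step: pick the neuron most correlated with the current residual
    def score(p):
        c_pde, c_bdy = pde_column(*p), bdy_column(*p)
        num = abs(c_pde @ residual + beta * c_bdy @ bdy_residual)
        den = np.sqrt(c_pde @ c_pde + beta * c_bdy @ c_bdy) + 1e-12
        return num / den
    w, b = max(candidates, key=score)
    chosen.append((w, b))
    A_pde.append(pde_column(w, b))
    A_bdy.append(bdy_column(w, b))

    # 2) orthogonal projection: refit all outer coefficients by least squares
    A = np.vstack([np.array(A_pde).T, np.sqrt(beta) * np.array(A_bdy).T])
    rhs = np.concatenate([f, np.zeros(2)])
    coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)

    residual = np.array(A_pde).T @ coef - f
    bdy_residual = np.array(A_bdy).T @ coef

u_approx = sum(c * relu2(w * x + b) for c, (w, b) in zip(coef, chosen))
print("max error:", np.abs(u_approx - u_exact).max())
```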


Similar Articles

Regularized Greedy Algorithms for Neural Network Training with Data Noise

The aim of this paper is to construct a modified greedy algorithm applicable to an ill-posed function approximation problem in the presence of data noise. This algorithm, coupled with a suitable stopping rule, can be interpreted as an iterative regularization method. We provide a detailed convergence analysis of the algorithm in the presence of noise, and discuss optimal choices of parameters. As a co...
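A toy illustration of the idea described above, under the assumption that the stopping rule is of discrepancy-principle type (one common choice; the cosine dictionary, noise level delta, and factor tau below are illustrative, not taken from the paper): an orthogonal greedy fit of noisy data stops as soon as the residual drops to the noise level, so the number of greedy steps acts as the regularization parameter.

```python
# Sketch (assumed, not the paper's algorithm): greedy approximation of noisy
# data with a discrepancy-principle stopping rule as iterative regularization.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 400)
delta = 0.05                                   # assumed known noise level
y = np.exp(-3 * x) * np.sin(4 * np.pi * x) + delta * rng.standard_normal(x.size)

# illustrative dictionary of cosine features
dictionary = [np.cos(np.pi * k * x) for k in range(64)]

selected, residual = [], y.copy()
tau = 1.2                                      # discrepancy-principle factor
while (np.linalg.norm(residual) / np.sqrt(x.size) > tau * delta
       and len(selected) < len(dictionary)):
    # greedy step: dictionary element most correlated with the residual
    g = max(dictionary, key=lambda d: abs(d @ residual) / np.linalg.norm(d))
    selected.append(g)
    # orthogonal projection onto the span of all selected elements
    A = np.array(selected).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residual = y - A @ coef

print(f"stopped after {len(selected)} greedy steps")
```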


Neural networks: Algorithms and applications

This special issue of Neurocomputing includes 24 original articles which are extended versions of selected papers from the Fourth International Symposium on Neural Networks (ISNN 2007). As a sequel to ISNN 2004/ISNN 2005/ISNN 2006, ISNN 2007 was held in June 2007 in Nanjing, China, an old capital of China and a modern metropolis with a 2470-year history. ISNN 2007 provided a high-level internatio...


Effective training algorithms for RBF-neural networks

A new structure and training algorithms for RBF-type neural networks are proposed. An extra neuron layer is added to realize the principal component method. A novel training algorithm is designed for training each separate neuron in the hidden layer, which guarantees the efficiency and finiteness of the training procedure. Results were obtained for a variety of problems. In particular, the resu...


Analysis and Comparison of Algorithms for Training Recurrent Neural Networks

How did that work? No need to concern myself; the majority of people benefit from the technology of their civilization without understanding it.



Journal

Journal title: Journal of Computational Physics

Year: 2023

ISSN: 1090-2716, 0021-9991

DOI: https://doi.org/10.1016/j.jcp.2023.112084