Deep learning for gradient flows using the Brezis–Ekeland principle

Authors

Abstract

We propose a deep learning method for the numerical solution of partial differential equations that arise as gradient flows. The method relies on the Brezis–Ekeland principle, which naturally defines an objective function to be minimized, and so is ideally suited to a machine learning approach using deep neural networks. We describe our method in a general framework and illustrate it with the help of an example implementation for the heat equation in space dimensions two to seven.
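
For orientation, the classical Brezis–Ekeland principle can be summarized as follows (a textbook statement in our notation, not quoted from the paper). For a convex, lower semicontinuous energy \varphi on a Hilbert space and the gradient flow \dot u(t) \in -\partial\varphi(u(t)) with u(0) = u_0, the solution is the unique minimizer, with minimum value zero, of

    J(v) = \int_0^T \Big[ \varphi(v(t)) + \varphi^*\big(-\dot v(t)\big) \Big] \,\mathrm{d}t + \tfrac{1}{2}\|v(T)\|^2 - \tfrac{1}{2}\|u_0\|^2

over trajectories with v(0) = u_0, where \varphi^* denotes the Fenchel conjugate. Nonnegativity of J follows from the Fenchel–Young inequality \varphi(v) + \varphi^*(-\dot v) \ge \langle -\dot v, v \rangle = -\tfrac{1}{2}\tfrac{\mathrm{d}}{\mathrm{d}t}\|v\|^2, which is what makes J a natural loss function for a neural-network ansatz.

A minimal finite-dimensional sketch of this idea, assuming a quadratic energy phi(u) = 0.5 u^T K u with K symmetric positive definite (so phi*(w) = 0.5 w^T K^{-1} w), a small PyTorch network parameterizing the trajectory, and a central-difference time derivative; this is a hypothetical illustration of the principle, not the paper's implementation:

    import torch

    torch.manual_seed(0)
    n, T, m = 8, 1.0, 64   # state dimension, final time, quadrature points
    h = 1e-4               # step for the central-difference time derivative

    # SPD matrix K (1D discrete Laplacian stencil): phi(u) = 0.5 u^T K u
    K = (2 * torch.eye(n) - torch.diag(torch.ones(n - 1), 1)
         - torch.diag(torch.ones(n - 1), -1))
    Kinv = torch.linalg.inv(K)  # conjugate: phi*(w) = 0.5 w^T K^{-1} w
    u0 = torch.randn(n)

    net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                              torch.nn.Linear(32, n))

    def u(t):  # ansatz u0 + t * net(t) enforces the initial condition exactly
        return u0 + t * net(t)

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    t = torch.linspace(0.0, T, m).reshape(-1, 1)

    for step in range(3000):
        opt.zero_grad()
        v = u(t)                              # trajectory values, shape (m, n)
        dv = (u(t + h) - u(t - h)) / (2 * h)  # time derivative of the ansatz
        phi = 0.5 * torch.einsum('mi,ij,mj->m', v, K, v)
        # phi*(-v') = phi*(v') here, since the quadratic conjugate is even
        phi_star = 0.5 * torch.einsum('mi,ij,mj->m', dv, Kinv, dv)
        # Brezis-Ekeland objective: time quadrature plus endpoint terms
        J = (phi + phi_star).mean() * T + 0.5 * v[-1].dot(v[-1]) - 0.5 * u0.dot(u0)
        J.backward()
        opt.step()

Since J \ge 0 with value zero exactly at the gradient-flow solution, the loss itself doubles as an a posteriori error indicator for the learned trajectory.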

Similar articles

A Variational Principle for Gradient Flows of Nonconvex Energies

We present a variational approach to gradient flows of energies of the form E = φ1−φ2 where φ1, φ2 are convex functionals on a Hilbert space. A global parameter-dependent functional over trajectories is proved to admit minimizers. These minimizers converge up to subsequences to gradient-flow trajectories as the parameter tends to zero. These results apply in particular to the case of non λ-conv...
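
The global parameter-dependent functional alluded to here is of weighted energy-dissipation (WED) type; in the usual notation (our hedged reconstruction, with ε > 0 the parameter and E = φ1 − φ2 the energy), it reads

    I_\varepsilon(u) = \int_0^T e^{-t/\varepsilon} \Big( \tfrac{1}{2}\|\dot u(t)\|^2 + \tfrac{1}{\varepsilon}\, E(u(t)) \Big) \,\mathrm{d}t,

and its minimizers converge, up to subsequences, to trajectories of the gradient flow as ε → 0.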

Bottom-up Deep Learning using the Hebbian Principle

The “fire together, wire together” Hebbian learning model is a central principle in neuroscience, but, surprisingly, it has found limited applicability in modern machine learning. In this paper, we show that neuro-plausible variants of competitive Hebbian learning provide a promising foundation for bottom-up deep learning. We propose an unsupervised learning algorithm termed Adaptive Hebbian Le...
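
In its simplest textbook form (not this paper's specific algorithm), the Hebbian update for a weight w_{ij} connecting presynaptic activity x_i to postsynaptic activity y_j is

    \Delta w_{ij} = \eta \, x_i \, y_j,

with learning rate η; competitive variants apply the update only to winning units, which is the kind of neuro-plausible modification the paper builds on.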

Maximum Principle Based Algorithms for Deep Learning

The continuous dynamical systems approach to deep learning is explored in order to devise alternative frameworks for training algorithms. Training is recast as a control problem, which allows us to formulate necessary optimality conditions in continuous time using Pontryagin’s maximum principle (PMP). A modification of the method of successive approximations is then used to solve the PMP, ...
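
In generic optimal-control notation (a textbook statement, not the paper's exact formulation), for dynamics \dot x = f(x, \theta) with running cost L and terminal cost \Phi, the PMP necessary conditions read

    H(x, p, \theta) = p \cdot f(x, \theta) - L(x, \theta),
    \dot x^* = \nabla_p H, \qquad x^*(0) = x_0,
    \dot p^* = -\nabla_x H, \qquad p^*(T) = -\nabla\Phi(x^*(T)),
    H(x^*(t), p^*(t), \theta^*(t)) \ge H(x^*(t), p^*(t), \theta) \quad \text{for all admissible } \theta,

and the method of successive approximations alternates a forward pass for x, a backward pass for the adjoint p, and a pointwise Hamiltonian maximization in \theta.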

Annealed Gradient Descent for Deep Learning

Stochastic gradient descent (SGD) has been regarded as a successful optimization algorithm in machine learning. In this paper, we propose a novel annealed gradient descent (AGD) method for non-convex optimization in deep learning. AGD optimizes a sequence of gradually improved smoother mosaic functions that approximate the original non-convex objective function according to an annealing schedul...
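
A hedged illustration of annealed or graduated optimization in the same spirit (a generic sketch, not the paper's AGD with mosaic functions): minimize Gaussian-smoothed surrogates f_σ(x) = E_ξ[f(x + σξ)] while shrinking σ along an annealing schedule.

    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):  # nonconvex toy objective with many shallow local minima
        return np.sin(5 * x) + 0.5 * x ** 2

    def smoothed_grad(x, sigma, k=256):
        # Monte Carlo estimate of the gradient of f_sigma(x) = E[f(x + sigma*xi)]
        # via the Gaussian score-function identity, with f(x) as a variance baseline
        xi = rng.standard_normal(k)
        return np.mean((f(x + sigma * xi) - f(x)) * xi) / sigma

    x, lr = 2.0, 0.05
    for sigma in [1.0, 0.5, 0.25, 0.1, 0.02]:  # annealing schedule
        for _ in range(200):
            x -= lr * smoothed_grad(x, sigma)
    print(x, f(x))

Early on, the heavy smoothing washes out shallow local minima; as σ decreases, the surrogate approaches the original objective, mirroring the "gradually improved" approximations described above.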

Natural Gradient Deep Q-learning

This paper presents findings on training a Q-learning reinforcement learning agent using natural gradient techniques. We compare the original deep Q-network (DQN) algorithm to its natural gradient counterpart (NGDQN), measuring NGDQN and DQN performance on classic control environments without target networks. We find that NGDQN performs favorably relative to DQN, converging to significantly b...
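
The natural gradient replaces the Euclidean gradient step by one preconditioned with the Fisher information matrix (a standard definition, not NGDQN's specific estimator):

    \theta \leftarrow \theta - \eta \, F(\theta)^{-1} \nabla_\theta L(\theta), \qquad F(\theta) = \mathbb{E}\big[ \nabla_\theta \log p_\theta \, (\nabla_\theta \log p_\theta)^{\top} \big],

which makes the update approximately invariant to smooth reparameterizations of the network.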

Journal

Journal title: Archivum Mathematicum

Year: 2023

ISSN: 0044-8753, 1212-5059

DOI: https://doi.org/10.5817/am2023-3-249