Learning Probabilistic Programs Using Backpropagation

Author

  • Avi Pfeffer
Abstract

Probabilistic modeling enables combining domain knowledge with learning from data, thereby supporting learning from fewer training instances than purely data-driven methods. However, learning probabilistic models is difficult and has not achieved the level of performance of methods such as deep neural networks on many tasks. In this paper, we attempt to address this issue by presenting a method for learning the parameters of a probabilistic program using backpropagation. Our approach opens the possibility of building deep probabilistic programming models that are trained in a similar way to neural networks.
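As a concrete illustration of the general idea (not the paper's actual algorithm, which the abstract does not specify), here is a minimal sketch of fitting the parameters of a toy probabilistic program by backpropagation, assuming a PyTorch-style autograd framework; all names and numbers in it are hypothetical:

    import torch

    torch.manual_seed(0)

    # A tiny "probabilistic program": y ~ Normal(w * x + b, sigma),
    # with learnable parameters w, b, and log_sigma.
    w = torch.zeros(1, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    log_sigma = torch.zeros(1, requires_grad=True)

    # Synthetic observations drawn from y = 2x + 1 plus noise.
    x = torch.linspace(-1.0, 1.0, 100)
    y = 2.0 * x + 1.0 + 0.1 * torch.randn(100)

    opt = torch.optim.Adam([w, b, log_sigma], lr=0.05)
    for step in range(500):
        opt.zero_grad()
        mu = w * x + b
        sigma = log_sigma.exp()
        # Negative log-likelihood of the data under the program;
        # backpropagating through it updates the program's parameters.
        nll = -torch.distributions.Normal(mu, sigma).log_prob(y).mean()
        nll.backward()
        opt.step()

    print(f"w={w.item():.2f}, b={b.item():.2f}, sigma={log_sigma.exp().item():.2f}")

This sketch relies on the likelihood being differentiable in the parameters; programs with discrete random choices would instead need a score-function or relaxation-based gradient estimator.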

Similar Papers

Deep Probabilistic Logic Programming

Probabilistic logic programming under the distribution semantics has been very useful in machine learning. However, inference is expensive, so learning algorithms can be slow. In this paper we consider a restriction of the language, called hierarchical PLP, in which clauses and predicates are hierarchically organized. In this case the language becomes truth-functional and infer...
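To make "truth-functional" concrete: in such a language the probability of a head atom can be computed bottom-up from clause probabilities alone, for example by a noisy-OR combination of independent clauses. A minimal sketch under that assumption (the numbers and names are hypothetical):

    # Probability that at least one independent cause fires.
    def noisy_or(ps):
        out = 1.0
        for p in ps:
            out *= 1.0 - p
        return 1.0 - out

    # Two annotated clauses for the same head atom; each contributes
    # its clause probability times the probability of its body.
    clause_prob = [0.4, 0.7]
    body_prob = [0.9, 0.5]
    head_prob = noisy_or([c * b for c, b in zip(clause_prob, body_prob)])
    print(head_prob)  # 1 - (1 - 0.36) * (1 - 0.35) = 0.584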

Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks

Large multilayer neural networks trained with backpropagation have recently achieved state-of-the-art results in a wide range of problems. However, using backprop for neural net learning still has some disadvantages, e.g., having to tune a large number of hyperparameters to the data, lack of calibrated probabilistic predictions, and a tendency to overfit the training data. In principle, the Baye...

Combining Neural and Symbolic Learning to Revise Probabilistic Rule Bases

This paper describes RAPTURE, a system for revising probabilistic knowledge bases that combines neural and symbolic learning methods. RAPTURE uses a modified version of backpropagation to refine the certainty factors of a MYCIN-style rule base and uses ID3's information-gain heuristic to add new rules. Results on refining two actual expert knowledge bases demonstrate that this combined approach ...

Parallel Probabilistic Neural Network (PPNN)

This paper points out that the planar topology of the conventional backpropagation neural network (BPNN) limits efforts to solve the problems associated with BPNN, such as slow convergence, local minima, and inability to learn. A parallel probabilistic neural network (PPNN) using a new neural network topology, stereotopology, was proposed to overcome these problems. The learni...

Combining Connectionist and Symbolic Learning to Refine Certainty-Factor Rule Bases

This paper describes Rapture, a system for revising probabilistic knowledge bases that combines connectionist and symbolic learning methods. Rapture uses a modified version of backpropagation to refine the certainty factors of a probabilistic rule base, and it uses ID3's information-gain heuristic to add new rules. Results on refining three actual expert knowledge bases demonstrate that this combi...

Journal:
  • CoRR

Volume: abs/1705.05396

Publication date: 2017