Partial Rule Weighting Using Single-Layer Perceptron
Authors
Abstract
Inductive Logic Programming (ILP) has been widely used in Knowledge Discovery in Databases (KDD). Ordinary ILP systems work in two-class domains, not in multi-class domains. We have previously proposed a method that helps ILP handle multi-class domains by combining partial rules, extracted from the ILP rules, with a weighting algorithm to classify unseen examples. In this paper, we improve the weighting algorithm by using single-layer perceptrons. The weights learned by the perceptrons and the partial rules are then combined to represent the knowledge extracted from the domain. The classification accuracy of the proposed method on a real-world data set of dopamine antagonist molecules shows that our approach markedly improves on both the previous weighting algorithm and the original ILP rules.
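Since the abstract does not spell out the weighting step, the sketch below illustrates only one plausible reading: each example is encoded as a binary vector of partial-rule firings, and a single layer of perceptrons (one weight vector per class) learns the rule weights used to pick a class. The class name, update rule, and hyperparameters (PartialRulePerceptron, lr, epochs) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

class PartialRulePerceptron:
    """Single-layer, multi-class perceptron over binary partial-rule
    firings (an illustrative sketch, not the paper's exact algorithm)."""

    def __init__(self, n_rules, n_classes, lr=0.1, epochs=50):
        self.w = np.zeros((n_classes, n_rules + 1))  # +1 column for a bias term
        self.lr, self.epochs = lr, epochs

    def _augment(self, X):
        # Append a constant bias input to every firing vector.
        X = np.asarray(X, dtype=float)
        return np.hstack([X, np.ones((X.shape[0], 1))])

    def fit(self, X, y):
        """X: 0/1 matrix of partial-rule firings, y: integer class labels."""
        Xa = self._augment(X)
        for _ in range(self.epochs):
            for xi, yi in zip(Xa, y):
                pred = int(np.argmax(self.w @ xi))
                if pred != yi:
                    # Classic multi-class perceptron update: reward the
                    # true class, penalise the wrongly predicted one.
                    self.w[yi] += self.lr * xi
                    self.w[pred] -= self.lr * xi
        return self

    def predict(self, X):
        return np.argmax(self._augment(X) @ self.w.T, axis=1)
```

Whether the method trains one perceptron per class or a single shared layer, and how ties between rules are resolved, is not stated in this abstract; the standard multi-class perceptron update above is just one conventional choice.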
Similar Resources
The Efficiency and the Robustness of Natural Gradient Descent Learning Rule
The inverse of the Fisher information matrix is used in the natural gradient descent algorithm to train single-layer and multi-layer perceptrons. We have discovered a new scheme to represent the Fisher information matrix of a stochastic multi-layer perceptron. Based on this scheme, we have designed an algorithm to compute the natural gradient. When the input dimension n is much larger than the ...
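As a point of reference only (the cited work concerns computing the natural gradient efficiently for multi-layer perceptrons, which this sketch does not attempt), a plain natural-gradient step replaces the ordinary gradient with the Fisher-preconditioned one; the damping term is a common practical safeguard added here, not part of the cited method.

```python
import numpy as np

def natural_gradient_step(w, grad, fisher, lr=0.01, damping=1e-4):
    """One natural-gradient update: w <- w - lr * F^{-1} grad, where F is
    an estimate of the Fisher information matrix. Damping keeps the solve
    well-conditioned when F is near-singular."""
    F = fisher + damping * np.eye(len(w))
    return w - lr * np.linalg.solve(F, grad)
```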
A Margin-based Model with a Fast Local Search for Rule Weighting and Reduction in Fuzzy Rule-based Classification Systems
Fuzzy Rule-Based Classification Systems (FRBCS) are highly investigated by researchers due to their noise-stability and interpretability. Unfortunately, generating a rule-base which is sufficiently both accurate and interpretable, is a hard process. Rule weighting is one of the approaches to improve the accuracy of a pre-generated rule-base without modifying the original rules. Most of the pro...
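As a generic illustration of rule weighting in an FRBCS (this is the standard single-winner scheme, not the margin-based model or local search of the cited paper), each fuzzy rule keeps its antecedent and consequent class, and only a scalar weight per rule is tuned:

```python
import numpy as np

def frbcs_single_winner(x, rules, weights):
    """Classify x with a weighted fuzzy rule base: `rules` is a list of
    (compatibility_fn, class_label) pairs; the winning rule is the one
    maximising compatibility(x) * weight, and its class is returned."""
    scores = [w * compat(x) for (compat, _), w in zip(rules, weights)]
    return rules[int(np.argmax(scores))][1]
```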
A learning rule for very simple universal approximators consisting of a single layer of perceptrons
One may argue that the simplest type of neural networks beyond a single perceptron is an array of several perceptrons in parallel. In spite of their simplicity, such circuits can compute any Boolean function if one views the majority of the binary perceptron outputs as the binary output of the parallel perceptron, and they are universal approximators for arbitrary continuous functions with valu...
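The forward pass of such a parallel perceptron is easy to state (a sketch of the architecture only; the specific learning rule proposed in the cited paper is not reproduced here): several perceptrons vote in parallel and the majority sign is the binary output.

```python
import numpy as np

def parallel_perceptron(W, x):
    """Binary output of a parallel perceptron: each row of W is one
    perceptron; the overall output is the majority vote of the
    individual +1/-1 outputs."""
    votes = np.where(W @ x >= 0, 1, -1)   # each perceptron's binary vote
    return 1 if votes.sum() >= 0 else -1  # majority, ties broken towards +1
```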
Application of Artificial Neural Network, Kriging, and Inverse Distance Weighting Models for Estimation of Scour Depth around Bridge Pier with Bed Sill
This paper outlines the application of the multi-layer perceptron artificial neural network (ANN), ordinary kriging (OK), and inverse distance weighting (IDW) models in the estimation of local scour depth around bridge piers. As part of this study, bridge piers were installed with bed sills at the bed of an experimental flume. Experimental tests were conducted under different flow conditions an...
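Of the three estimators mentioned, IDW is the simplest to state (a generic formulation with the usual decay exponent; the experimental setup and input variables of the cited study are not reproduced here):

```python
import numpy as np

def idw_estimate(query, points, values, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: the estimate at `query` is the average
    of known `values`, weighted by 1 / distance**power; if the query
    coincides with a sample point, that sample's value is returned."""
    d = np.linalg.norm(np.asarray(points) - np.asarray(query), axis=1)
    if np.any(d < eps):
        return float(np.asarray(values)[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * np.asarray(values)) / np.sum(w))
```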
The Effect of Decision Surface Fitness on Dynamic Multi-layer Perceptron Networks (DMP1)
The DMP1 (Dynamic Multi-layer Perceptron 1) network training method is based upon a divide and conquer approach which builds networks in the form of binary trees, dynamically allocating nodes and layers as needed. This paper introduces the DMP1 method, and compares the performance of DMP1 when using the standard delta rule training method for training individual nodes against the performance of...