Learning (Complex) Structural Descriptions from Examples
Abstract
SUMMARY. We present a formalization of an intuitively sound strategy for learning a description from examples: within a partition, examples are grouped according to their greatest resemblances, and examples not in the same subset show a maximum of differences.

I. INTRODUCTION

WINSTON [4] has demonstrated the importance of the near-miss concept in the context of learning descriptions from examples. His methodology is practical when a few simple scenes are dealt with; we have extended it to handle numerous complex examples. The definition of the near-miss concept [3, 4] is summarized in section 3. A first problem arises from the fact that a large number of near-misses can be obtained which do not all convey the same type of information. Our experience shows that at least three types of near-misses must be introduced: highly ambiguous, ambiguous, and discriminant near-misses, each conveying a different type of information. A second problem concerns building a structural description when several examples of several concepts are given: so many descriptions are possible that the most suitable one must be chosen as a recognition device. This leads us to define "promising" partitions of a set of examples. Given a set of examples, the description is a tree constructed recursively by the rule: divide the set into its most promising partitions. An example of our methodology is given below. This example is actually too simple for the system and should be considered only as an illustration of the definitions we propose.
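The recursive construction rule above (divide the set into its most promising partition, then recurse on each subset) can be sketched in a few lines. This is a minimal illustration only: the `similarity` and `promise` functions below are hypothetical placeholders standing in for the paper's resemblance and "promise" criteria, and only two-way splits are considered.

```python
from itertools import combinations

def similarity(a, b):
    # Placeholder resemblance measure: number of shared attribute values.
    return len(set(a) & set(b))

def promise(partition):
    # Placeholder score: reward resemblance within each subset and
    # penalize resemblance across subsets ("a maximum of differences").
    within = sum(similarity(a, b)
                 for subset in partition
                 for a, b in combinations(subset, 2))
    across = sum(similarity(a, b)
                 for s1, s2 in combinations(partition, 2)
                 for a in s1 for b in s2)
    return within - across

def bipartitions(examples):
    # Every split of the example list into two non-empty subsets.
    n = len(examples)
    for mask in range(1, 2 ** (n - 1)):
        left = [examples[i] for i in range(n) if mask >> i & 1]
        right = [examples[i] for i in range(n) if not mask >> i & 1]
        yield (left, right)

def description_tree(examples):
    # Recursive rule: split the set at its most promising partition.
    if len(examples) <= 1:
        return examples
    best = max(bipartitions(examples), key=promise)
    return [description_tree(subset) for subset in best]
```

For instance, with examples described as attribute sets, `description_tree([{"red", "big"}, {"red", "small"}, {"blue", "tall"}])` first separates the blue example from the two red ones, then splits the red pair.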
Similar papers
Efficient Learning of Context-Free Grammars from Positive Structural Examples
In this paper, we introduce a new normal form for context-free grammars, called reversible context-free grammars, for the problem of learning context-free grammars from positive-only examples. A context-free grammar G = (N, Σ, P, S) is said to be reversible if (1) A → α and B → α in P implies A = B, and (2) A → αBβ and A → αCβ in P implies B = C. We show that the class of reversible context...
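The two reversibility conditions (no two distinct nonterminals with the same right-hand side, and no two productions of the same nonterminal differing only in one nonterminal symbol) are purely syntactic and can be checked mechanically. The sketch below assumes a hypothetical grammar encoding as (left-hand side, right-hand-side tuple) pairs:

```python
def is_reversible(productions):
    # Hypothetical encoding: each production is (lhs, rhs) with rhs a
    # tuple of symbols; nonterminals are the symbols used as lhs.
    nonterminals = {lhs for lhs, _ in productions}
    # Condition (1): distinct nonterminals never share an identical
    # right-hand side.
    seen = {}
    for lhs, rhs in productions:
        if seen.setdefault(rhs, lhs) != lhs:
            return False
    # Condition (2): two productions of the same nonterminal may not
    # differ only at one position where both symbols are nonterminals.
    prods = list(productions)
    for i in range(len(prods)):
        for j in range(i + 1, len(prods)):
            (a1, r1), (a2, r2) = prods[i], prods[j]
            if a1 != a2 or len(r1) != len(r2):
                continue
            diff = [k for k in range(len(r1)) if r1[k] != r2[k]]
            if (len(diff) == 1
                    and r1[diff[0]] in nonterminals
                    and r2[diff[0]] in nonterminals):
                return False
    return True
```

For example, a grammar containing both S → aA and S → aB violates condition (2), and one in which S → a and A → a share a right-hand side violates condition (1).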
The dissimilarity space: Bridging structural and statistical pattern recognition
Human experts constitute pattern classes of natural objects based on their observed appearance. Automatic systems for pattern recognition may be designed on a structural description derived from sensor observations. Alternatively, training sets of examples can be used in statistical learning procedures. They are most powerful for vectorial object representations. Unfortunately, structural descr...
Representation Changes for Efficient Learning in Structural Domains
This paper presents an efficient approach to the task of learning from a large number of learning examples in structural domains. While in attribute-value representations only one mapping is possible between descriptions, in first-order logic representations there are potentially many mappings. Classic approaches consider all mappings and then define a restricted hypothesis space to cope ...
Learning Concept Descriptions from Examples with Errors
This paper presents a scheme for learning complex descriptions, such as logic formulas, from examples with errors. The basis for learning is provided by a selection criterion which minimizes a combined measure of discrepancy of a description with training data, and complexity of a description. Learning rules for two types of descriptors are derived: one for finding descriptors with good average...
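The selection criterion described above, minimizing a combined measure of discrepancy with the training data and complexity of the description, can be sketched as follows. The encoding of a description as a (predictor, complexity) pair and the additive trade-off are illustrative assumptions, not the paper's actual formulation:

```python
def best_description(candidates, data, tradeoff=1.0):
    # candidates: (predict_fn, complexity) pairs -- a hypothetical
    # encoding of candidate descriptions.  `tradeoff` weights
    # complexity against discrepancy and is purely illustrative.
    def score(candidate):
        predict, complexity = candidate
        # Discrepancy: number of training examples the description
        # classifies incorrectly.
        discrepancy = sum(predict(x) != y for x, y in data)
        return discrepancy + tradeoff * complexity
    return min(candidates, key=score)
```

A slightly more complex description that fits the data is then preferred over a trivial one that misclassifies several examples, since its lower discrepancy outweighs its complexity cost.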
A Comparative Review of Selected Methods for Learning from Examples
Research in the area of learning structural descriptions from examples is reviewed, giving primary attention to methods of learning characteristic descriptions of single concepts. In particular, we examine methods for finding the maximally-specific conjunctive generalizations (MSC-generalizations) that cover all of the training examples of a given concept. Various important aspects of structu...