Search results for: information gain

Number of results: 1,294,430

2016
Nicholas C. Fung, Carlos Nieto-Granda, Jason M. Gregory, John G. Rogers

Journal: Neurocomputing, 2005
Yuguo Yu, Tai Sing Lee

Contrast gain control is an important mechanism underlying the visual system's adaptation to luminance contrast in varying visual environments. Our previous work showed that the threshold and saturation determine the preferred contrast sensitivity as well as the maximum information coding capacity of the neuronal model. In this report, we investigated the design principles underlying adaptat...

2011
Linmin Yang, Zhe Dang, Thomas R. Fischer

For model-based black-box testing, test cases are often selected from the syntactic appearance of the specification of the system under test, according to a pre-given test data adequacy criterion. We introduce a novel approach that is semantics-based, independent of the syntactic appearance of the system specification. Basically, we model the system under test as a random variable, whose sample...
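
The abstract models the system under test as a random variable; one minimal way to make that concrete (a generic illustration of expected information gain for test selection, not the authors' specific construction) is to score a candidate test case by how much it is expected to reduce the entropy over competing hypotheses about the system's behavior. The hypothesis names and outcomes below are hypothetical.

```python
import math
from collections import defaultdict

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_information_gain(prior, outcome_of):
    """
    prior:      dict hypothesis -> probability (beliefs about the system's behavior)
    outcome_of: dict hypothesis -> observable output the test would produce
    Returns the expected reduction in entropy over hypotheses from running the test.
    """
    h_before = entropy(prior.values())
    by_outcome = defaultdict(dict)
    for h, p in prior.items():
        by_outcome[outcome_of[h]][h] = p
    h_after = 0.0
    for group in by_outcome.values():
        p_outcome = sum(group.values())
        posterior = [p / p_outcome for p in group.values()]
        h_after += p_outcome * entropy(posterior)
    return h_before - h_after

# Toy example: three equally likely hypotheses about the implementation; the test
# distinguishes h1 from {h2, h3} by its observable output.
prior = {"h1": 1 / 3, "h2": 1 / 3, "h3": 1 / 3}
outcome = {"h1": "PASS", "h2": "FAIL", "h3": "FAIL"}
print(expected_information_gain(prior, outcome))   # about 0.918 bits
```

Greedily picking the test with the highest score, observing the result, and repeating is the usual way such a measure is turned into a selection strategy.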

Journal: Psychological Science, 2010
Jonathan D. Nelson, Craig R. M. McKenzie, Garrison W. Cottrell, Terrence J. Sejnowski

Deciding which piece of information to acquire or attend to is fundamental to perception, categorization, medical diagnosis, and scientific inference. Four statistical theories of the value of information, namely information gain, Kullback-Leibler distance, probability gain (error minimization), and impact, are equally consistent with extant data on human information acquisition. Three experiments, desi...
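
As a concrete reference point, the sketch below computes the expected value of a single binary query under each of the four measures. The formulas are standard ones, and details such as the normalization of impact vary between authors, so this is illustrative rather than a reproduction of the paper's definitions; the toy prior and likelihoods are invented.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits, skipping zero-probability outcomes."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def kl_bits(p, q):
    """Kullback-Leibler divergence D(p || q) in bits."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def query_usefulness(prior, likelihood):
    """
    prior:      P(category), shape (n_categories,)
    likelihood: P(answer | category), shape (n_categories, n_answers)
    Returns the expected value of asking the query under four measures.
    """
    prior = np.asarray(prior, dtype=float)
    lik = np.asarray(likelihood, dtype=float)
    p_answer = prior @ lik                        # marginal P(answer)
    out = {"information_gain": 0.0, "kl_distance": 0.0,
           "probability_gain": 0.0, "impact": 0.0}
    for a, pa in enumerate(p_answer):
        if pa == 0:
            continue
        posterior = prior * lik[:, a] / pa        # Bayes rule: P(category | answer)
        out["information_gain"] += pa * (shannon_entropy(prior) - shannon_entropy(posterior))
        out["kl_distance"] += pa * kl_bits(posterior, prior)
        out["probability_gain"] += pa * (posterior.max() - prior.max())
        out["impact"] += pa * np.sum(np.abs(posterior - prior))
    return out

# Toy example: two categories and a binary feature that is moderately diagnostic.
print(query_usefulness(prior=[0.7, 0.3],
                       likelihood=[[0.9, 0.1],    # P(answer | category 0)
                                   [0.4, 0.6]]))  # P(answer | category 1)
```

In expectation, information gain and Kullback-Leibler distance coincide (both equal the mutual information between query and category), although they differ for individual answers.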

2011
Sven Stringer, Denny Borsboom, Eric-Jan Wagenmakers

One of the most popular paradigms to use for studying human reasoning involves the Wason card selection task. In this task, the participant is presented with four cards and a conditional rule (e.g., "If there is an A on one side of the card, there is always a 2 on the other side"). Participants are asked which cards should be turned to verify whether or not the rule holds. In this simple task, ...

Journal: Entropy, 2016
Renata Rychtáriková, Jan Korbel, Petr Machácek, Petr Císar, Jan Urban, Dalibor Stys

We generalize the point information gain (PIG) and derived quantities, i.e., point information gain entropy (PIE) and point information gain entropy density (PIED), for the case of the Rényi entropy and simulate the behavior of PIG for typical distributions. We also use these methods for the analysis of multidimensional datasets. We demonstrate the main properties of PIE/PIED spectra for the re...
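
A minimal sketch of a point information gain computation, assuming PIG is the change in Rényi entropy when a single occurrence of the examined value is removed from the sample; the paper's exact sign and normalization conventions may differ, and the function names and toy data below are hypothetical.

```python
import numpy as np
from collections import Counter

def renyi_entropy(counts, alpha):
    """Rényi entropy (in bits) of the empirical distribution given by counts."""
    p = np.asarray(list(counts), dtype=float)
    p = p[p > 0] / p.sum()
    if abs(alpha - 1.0) < 1e-12:                   # alpha -> 1 recovers Shannon entropy
        return float(-np.sum(p * np.log2(p)))
    return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))

def point_information_gain(data, x, alpha=2.0):
    """
    Entropy change caused by removing one occurrence of value x from the sample;
    treat the sign convention as illustrative.
    """
    hist = Counter(data)
    h_with = renyi_entropy(hist.values(), alpha)
    hist[x] -= 1                                   # drop a single occurrence of x
    h_without = renyi_entropy(hist.values(), alpha)
    return h_with - h_without

data = [0, 0, 0, 1, 1, 2, 2, 2, 2, 3]
# One PIG value per distinct element; PIE/PIED-style quantities aggregate over these.
print({x: round(point_information_gain(data, x, alpha=2.0), 4) for x in set(data)})
```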

2017
Meenu Dave, Rashmi Agrawal, Mary Jean Harrold, R. A. DeMillo, D. S. Guindi, K. N. King, W. M. McCracken

Test suite optimization during test case generation can save time and cost. The paper presents an information-theory-based metric to filter redundant test cases and reduce the test suite size while maintaining coverage of the requirements with minimal loss of mutant coverage. The paper proposes two versions, RR and RR2. RR filters test cases for each requirement, whereas RR2 f...

2010
Alec Pawling, Nitesh V. Chawla, Amitabh Chaudhary

Computing information gain in general data streams, in which we make no assumptions about the underlying distributions or domains, is a hard problem, severely constrained by limited memory space. We present a simple randomized solution to this problem that is time- and space-efficient and tolerates a relative error with a theoretical upper bound. It is based on a novel...
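
For orientation, the sketch below maintains exact counts over a stream and computes the information gain of one categorical attribute about a class label on demand. The paper's contribution is a randomized, space-bounded replacement for exactly this kind of counting, which is not reproduced here; the class and example stream are hypothetical.

```python
import math
from collections import defaultdict

class StreamingInfoGain:
    """Exact incremental information gain of one categorical attribute about a label."""

    def __init__(self):
        self.n = 0
        self.label_counts = defaultdict(int)
        self.value_counts = defaultdict(int)
        self.joint_counts = defaultdict(int)

    def update(self, value, label):
        """Consume one (attribute value, class label) pair from the stream."""
        self.n += 1
        self.label_counts[label] += 1
        self.value_counts[value] += 1
        self.joint_counts[(value, label)] += 1

    @staticmethod
    def _entropy(counts, total):
        return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

    def information_gain(self):
        """H(label) - H(label | value), computed from the current counts."""
        if self.n == 0:
            return 0.0
        h_label = self._entropy(self.label_counts.values(), self.n)
        h_cond = 0.0
        for v, nv in self.value_counts.items():
            within = [self.joint_counts.get((v, l), 0) for l in self.label_counts]
            h_cond += nv / self.n * self._entropy(within, nv)
        return h_label - h_cond

# Toy stream of (attribute value, class label) pairs.
ig = StreamingInfoGain()
for pair in [("a", 0), ("a", 0), ("a", 1), ("b", 1), ("b", 1), ("b", 0), ("a", 0)]:
    ig.update(*pair)
print(round(ig.information_gain(), 4))
```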

2013
Antony Selvadoss Thanamani

Attribute reduction is one of the key processes in knowledge acquisition. Some datasets are multidimensional and large in size. If such a dataset is used for classification, it may yield incorrect results and occupy more resources, especially time. Most of the features present are redundant or inconsistent and degrade the classification. In order to improve the efficien...
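
A common baseline for information-gain-based attribute reduction (a generic sketch, not the specific method proposed in this paper) ranks each attribute by how much it reduces the entropy of the class label and keeps only the top-scoring ones; the toy dataset is invented.

```python
import math
from collections import Counter, defaultdict

def entropy(labels):
    """Shannon entropy in bits of a list of class labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(column, labels):
    """Reduction in class-label entropy obtained by splitting on one attribute."""
    n = len(labels)
    groups = defaultdict(list)
    for value, label in zip(column, labels):
        groups[value].append(label)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

def select_attributes(rows, labels, k):
    """Rank attributes (columns of rows) by information gain and keep the top k."""
    n_attrs = len(rows[0])
    scores = [(information_gain([r[j] for r in rows], labels), j) for j in range(n_attrs)]
    return [j for _, j in sorted(scores, reverse=True)[:k]]

# Toy dataset: attribute 0 is perfectly predictive, attribute 1 is pure noise.
rows = [(1, 0), (1, 1), (0, 0), (0, 1), (1, 0), (0, 1)]
labels = ["yes", "yes", "no", "no", "yes", "no"]
print(select_attributes(rows, labels, k=1))   # -> [0]
```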

Chart of the number of search results per year
