Search results for: slight pruning 2

Number of results: 2,557,222

2011
Makoto Oshima, Koji Yamada, Satoshi Endo

In this study, we tackled the reduction of computational complexity by pruning the igo game tree using the potential model based on the knowledge expression of igo. The potential model considers go stones as potentials. Specific potential distributions on the go board result from each arrangement of the stones on the go board. Pruning using the potential model categorizes the legal moves into e...
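The excerpt does not give the potential function itself, but the idea of treating stones as sources of a decaying field over the board can be sketched roughly. The decay rule, sign convention, and board size below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def potential_field(black, white, size=9, decay=0.5):
    """Sketch of a 'potential model' board evaluation: each stone
    radiates a potential that decays with Chebyshev distance, black
    positive and white negative. Legal moves in low-|potential|
    regions could then be pruned first. (Decay function and sign
    convention are illustrative assumptions, not the paper's.)"""
    ys, xs = np.mgrid[0:size, 0:size]
    field = np.zeros((size, size))
    for stones, sign in ((black, 1.0), (white, -1.0)):
        for sy, sx in stones:
            d = np.maximum(np.abs(ys - sy), np.abs(xs - sx))
            field += sign * decay ** d
    return field

f = potential_field(black=[(4, 4)], white=[(2, 2)], size=9)
print(f[4, 4], f[2, 2])  # 0.75 -0.75
```

With one black and one white stone, each point on the board sums a positive and a negative contribution, so a point on a stone is dominated by that stone's own potential.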

1998
Dee Jay Randall, Howard J. Hamilton, Robert J. Hilderman

This paper addresses the problem of generalizing temporal data based on calendar (date and time) attributes. The proposed method is based on a domain generalization graph, i.e., a lattice defining a partial order that represents a set of generalization relations for the attribute. We specify the components of a domain generalization graph suited to calendar attributes. We define granularity, subs...
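As a rough illustration of generalizing a calendar attribute along a granularity ordering: the paper's structure is a full lattice (day generalizes to both week and month, for instance), which is simplified below to a single chain, so the field names and ordering are assumptions for illustration only:

```python
from datetime import datetime

# Hypothetical linear chain of calendar granularities; the paper's
# domain generalization graph is a lattice, simplified here to a chain.
GRANULARITIES = ["year", "month", "day", "hour", "minute", "second"]

def generalize(ts, granularity):
    """Generalize a timestamp by truncating it to the given granularity."""
    keep = GRANULARITIES[: GRANULARITIES.index(granularity) + 1]
    parts = {f: getattr(ts, f) for f in keep}
    parts.setdefault("month", 1)  # datetime requires month and day
    parts.setdefault("day", 1)
    return datetime(**parts)

t = datetime(1998, 7, 16, 13, 45, 30)
print(generalize(t, "month"))  # 1998-07-01 00:00:00
print(generalize(t, "hour"))   # 1998-07-16 13:00:00
```

Moving up the chain (hour → day → month → year) discards detail monotonically, which is exactly the partial-order property the generalization graph encodes.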

2018
Konstantinos Pitas, Mike Davies, Pierre Vandergheynst

Recent DNN pruning algorithms have succeeded in reducing the number of parameters in fully connected layers, often with little or no drop in classification accuracy. However, most of the existing pruning schemes either have to be applied during training or require a costly retraining procedure after pruning to regain classification accuracy. We start by proposing a cheap pruning algorithm for f...
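The snippet does not state the pruning criterion, but the common magnitude-based approach for fully connected layers can be sketched as follows; the threshold rule and sparsity level are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def prune_fc_weights(W, sparsity=0.9):
    """Zero out the smallest-magnitude entries of a fully connected
    weight matrix W, keeping roughly the (1 - sparsity) fraction of
    largest-magnitude weights. No retraining is performed here."""
    k = int(W.size * sparsity)
    if k == 0:
        return W.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    mask = np.abs(W) > threshold
    return W * mask

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))
W_pruned = prune_fc_weights(W, sparsity=0.9)
print((W_pruned == 0).mean())  # ≈ 0.9 of entries are now zero
```

In practice such a mask is applied per layer and accuracy is then checked on a validation set; schemes like the one in the abstract aim to avoid the costly retraining step that usually follows.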

1995
Lee Naish

The logic programming community has a love-hate relationship with operators for pruning the search space of logic programs, such as cut, commit, once, conditionals, and variations on these. Pruning operators typically are not declarative, result in incompleteness and/or unsoundness, decrease the readability and flexibility of code, and make program analysis and transformation more difficult. Despite this...
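Prolog's pruning operators have no direct Python analogue, but the effect of `once/1` — commit to the first solution and cut the remaining search — can be loosely mimicked with a lazy generator (purely illustrative, not the abstract's formalism):

```python
def solve(candidates, goal):
    """Naive backtracking-style search: lazily yield every solution."""
    for x in candidates:
        if goal(x):
            yield x

# Without pruning: exhaust the search space, collecting all solutions.
all_sols = list(solve(range(10), lambda x: x % 3 == 0 and x > 0))

# 'once'-style pruning: take the first solution and abandon the rest,
# loosely analogous to Prolog's once/1.
first = next(solve(range(10), lambda x: x % 3 == 0 and x > 0), None)
print(all_sols, first)  # [3, 6, 9] 3
```

The trade-off the abstract describes is visible even here: the pruned query is cheaper but silently discards solutions, which is why such operators are considered non-declarative.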

Journal: Neurocomputing, 2002
Iulian B. Ciocoiu

A new supervised learning procedure for training RBF networks is proposed. It uses a pair of parallel running Kalman filters to sequentially update both the output weights and the centers of the network. The method offers advantages over the joint parameters vector approach in terms of memory requirements and training time. Simulation results for chaotic time series prediction and the 2-spirals...
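The abstract describes a pair of parallel Kalman filters, one for the output weights and one for the centers; only the weight filter — a standard linear Kalman / recursive-least-squares update — is sketched here, with the noise parameters, basis widths, and toy data chosen arbitrarily:

```python
import numpy as np

def rbf_activations(x, centers, width=1.0):
    """Gaussian RBF activations of a scalar input."""
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

def kalman_train_weights(xs, ys, centers, R=0.01, Q=1e-5):
    """Sequentially update RBF output weights with a linear Kalman
    filter. (The paper runs a second, parallel filter for the
    centers, which is omitted in this sketch.)"""
    n = len(centers)
    w = np.zeros(n)   # state estimate: output weights
    P = np.eye(n)     # state covariance
    for x, y in zip(xs, ys):
        h = rbf_activations(x, centers)  # observation vector
        P = P + Q * np.eye(n)            # process-noise inflation
        K = P @ h / (h @ P @ h + R)      # Kalman gain
        w = w + K * (y - h @ w)          # measurement update
        P = P - np.outer(K, h) @ P       # covariance update
    return w

centers = np.linspace(-2.0, 2.0, 9)
xs = np.linspace(-2.0, 2.0, 200)
ys = np.sin(np.pi * xs / 2)
w = kalman_train_weights(xs, ys, centers)
pred = rbf_activations(xs[:, None], centers) @ w
print(np.mean((pred - ys) ** 2))  # small residual after one pass
```

Because the weights enter the output linearly, this filter is essentially regularized recursive least squares, which is what gives the memory and training-time advantages the abstract claims over batch estimation of the joint parameter vector.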

2017
Sajid Anwar, Wonyong Sung

The learning capability of a neural network improves with increasing depth at higher computational costs. Wider layers with dense kernel connectivity patterns further increase this cost and may hinder real-time inference. We propose feature map and kernel pruning for reducing the computational complexity of a deep convolutional neural network. Due to their coarse nature, these pruning granularities c...
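A minimal sketch of coarse-grained feature-map pruning — ranking whole output channels and dropping the weakest, so the layer itself shrinks rather than just becoming sparse — assuming an L2-norm saliency measure, which may differ from the paper's actual criterion:

```python
import numpy as np

def prune_feature_maps(kernels, keep_ratio=0.5):
    """Coarse-grained pruning: drop entire output feature maps.

    kernels: array of shape (out_channels, in_channels, kh, kw).
    Channels are ranked by the L2 norm of their kernels (an assumed
    saliency measure); the lowest-norm maps are removed, shrinking
    the layer rather than merely zeroing individual weights."""
    norms = np.sqrt((kernels ** 2).sum(axis=(1, 2, 3)))
    n_keep = max(1, int(len(norms) * keep_ratio))
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return kernels[keep], keep

rng = np.random.default_rng(1)
K = rng.normal(size=(64, 32, 3, 3))
K_small, kept = prune_feature_maps(K, keep_ratio=0.5)
print(K_small.shape)  # (32, 32, 3, 3)
```

Because whole channels disappear, the next layer's kernels must also be sliced along their input-channel axis (`next_K[:, kept]`), which is what makes this granularity hardware-friendly: the pruned network is a smaller dense network, not a sparse one.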
