Neuron-by-Neuron Quantization for Efficient Low-Bit QNN Training
Authors
Abstract
Quantized neural networks (QNNs) are widely used to achieve computationally efficient solutions to recognition problems. Overall, eight-bit QNNs have almost the same accuracy as full-precision networks, while working several times faster. However, networks with lower quantization levels demonstrate inferior accuracy in comparison with their classical analogs. To solve this issue, a number of quantization-aware training (QAT) approaches have been proposed. In this paper, we study QAT approaches for two- to eight-bit linear quantization schemes and propose a new combined approach: neuron-by-neuron quantization with straight-through estimator (STE) gradient forwarding. It is suitable for quantization widths from two to eight bits and eliminates significant accuracy drops during training, which results in a better final QNN. We experimentally evaluate our approach on CIFAR-10 and ImageNet classification and show that it is comparable to other approaches at four to eight bits and outperforms some of them at two and three bits while being easier to implement. For example, the proposed approach to three-bit quantization on the CIFAR-10 dataset achieves 73.2% accuracy, while the baseline direct and layer-by-layer approaches result in 71.4% and 67.2% accuracy, respectively. For two-bit quantization of ResNet18 on ImageNet, our approach achieves 63.69% accuracy against 61.55% for the direct baseline.
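The STE gradient forwarding mentioned in the abstract can be illustrated in a few lines of PyTorch. The following is a minimal sketch, not the paper's exact training procedure: it assumes a symmetric per-tensor linear quantization grid, and the names `STEQuantizer` and `fake_quantize` are illustrative.

```python
import torch


class STEQuantizer(torch.autograd.Function):
    """Uniform (linear) fake-quantization with a straight-through
    estimator: the forward pass rounds to a low-bit grid, the backward
    pass forwards the gradient unchanged through the rounding."""

    @staticmethod
    def forward(ctx, x, bits):
        # Symmetric per-tensor scale (an illustrative assumption).
        qmax = 2 ** (bits - 1) - 1
        scale = x.abs().max().clamp(min=1e-8) / qmax
        q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
        return q * scale

    @staticmethod
    def backward(ctx, grad_output):
        # STE: ignore the zero-almost-everywhere derivative of round()
        # and pass the incoming gradient straight through.
        return grad_output, None


def fake_quantize(x, bits=3):
    return STEQuantizer.apply(x, bits)


if __name__ == "__main__":
    w = torch.randn(4, requires_grad=True)
    fake_quantize(w, bits=3).sum().backward()
    print(w.grad)  # all ones: the gradient was forwarded unchanged
```

In a neuron-by-neuron scheme, a wrapper like this would be enabled for one neuron's weights at a time rather than for a whole layer at once, which is what avoids the abrupt accuracy drops during training.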
Similar resources
A new circuit model for the Parameters in equations of low power Hodgkin-Huxley neuron cell
In this paper, the α and β parameters and the gating-variable equations of the Hodgkin-Huxley neuron cell are studied. Gating variables describe the opening and closing rates of the sodium and potassium ion flows in the neuron cell. The variable functions α and β are exponential functions of the potential u that were obtained experimentally by Hodgkin and Huxley to fit the equations of neural cells. In ...
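For concreteness, the standard 1952 Hodgkin-Huxley rate functions for the potassium gating variable n have this exponential form (these are the textbook constants, with u the membrane potential displacement in mV, not formulas taken from the paper above):

$$
\alpha_n(u) = \frac{0.01\,(10 - u)}{\exp\big((10 - u)/10\big) - 1},
\qquad
\beta_n(u) = 0.125\,\exp(-u/80),
\qquad
\frac{dn}{dt} = \alpha_n(u)\,(1 - n) - \beta_n(u)\,n
$$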
Implementation of a programmable neuron in CNTFET technology for low-power neural networks
The circuit-level implementation of a novel neuron is discussed in this article. A low-power activation function (AF) circuit is introduced, which is then combined with a highly linear synapse circuit to form the neuron architecture. Designed in Carbon Nanotube Field-Effect Transistor (CNTFET) technology, the proposed structure consumes low power, which makes it suitable for the...
Training neuron models with the Informax principle
In terms of the Informax principle and the input-output relationship of the integrate-and-fire (IF) model, IF neuron learning rules are developed and applied to blind separation tasks.
Efficient estimation of detailed single-neuron models.
Biophysically accurate multicompartmental models of individual neurons have significantly advanced our understanding of the input-output function of single cells. These models depend on a large number of parameters that are difficult to estimate. In practice, they are often hand-tuned to match measured physiological behaviors, thus raising questions of identifiability and interpretability. We p...
Emergent Activation Functions from a Stochastic Bit Stream Neuron
In this paper we present the results of experimental work that demonstrates the generation of linear and sigmoid activation functions in a digital stochastic bit stream neuron. These activation functions are generated by a stochastic process and require no additional hardware, allowing the design of an individual neuron to be extremely compact. Introduction: An artificial neuron is required to calcu...
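As a rough illustration of the bit-stream idea (a sketch of generic stochastic computing, not the circuit from the paper above): a value p in [0, 1] is encoded as a Bernoulli bit stream with P(bit = 1) = p, simple bitwise logic then implements arithmetic on the encoded values, and averaging the stream decodes the result.

```python
import random


def to_stream(p, n=4096):
    """Encode a probability p in [0, 1] as a Bernoulli bit stream."""
    return [1 if random.random() < p else 0 for _ in range(n)]


def stream_and(a, b):
    """Bitwise AND multiplies encoded values: P(a & b) = pa * pb
    for independent streams."""
    return [x & y for x, y in zip(a, b)]


def decode(stream):
    """Average the stream to estimate the encoded probability."""
    return sum(stream) / len(stream)


if __name__ == "__main__":
    a, b = to_stream(0.5), to_stream(0.8)
    print(decode(stream_and(a, b)))  # approximately 0.4
```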
Journal
Journal title: Mathematics
سال: 2023
ISSN: 2227-7390
DOI: https://doi.org/10.3390/math11092112