Reduced Precision Checking to Detect Errors in Floating Point Arithmetic

Authors

  • Yaqi Zhang
  • Ralph Nathan
  • Daniel J. Sorin
Abstract

We use reduced precision checking (RPC) to detect errors in floating point arithmetic. Prior work explored RPC for addition and multiplication. In this work, we extend RPC to a complete floating point unit (FPU), including division and square root, and we present precise analyses of the errors undetectable with RPC, showing bounds that are tighter than those of prior work. We implement RPC for a complete FPU in RTL and experimentally evaluate its error coverage and cost.
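As a rough illustration of the idea (not the paper's RTL implementation or its derived error bounds), the sketch below checks a double-precision multiplication by redoing it in single precision and comparing the two results against a relative tolerance. The function name rpc_check_mul and the 1e-6 tolerance are illustrative assumptions chosen to cover single-precision rounding, not values from the paper.

    #include <math.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical sketch of reduced precision checking (RPC):
       the main computation runs in double precision, a shadow checker
       repeats it in single precision, and the two results are compared
       against a relative tolerance sized for the checker's rounding. */
    static bool rpc_check_mul(double a, double b, double main_result)
    {
        float shadow = (float)a * (float)b;        /* reduced-precision redo   */
        double diff  = fabs(main_result - (double)shadow);
        double tol   = fabs(main_result) * 1e-6;   /* generous float budget    */
        return diff <= tol;                        /* false => suspected error */
    }

    int main(void)
    {
        double a = 1.000000123, b = 2.000000456;
        double r = a * b;                 /* full-precision operation          */
        printf("fault-free check passes: %d\n", rpc_check_mul(a, b, r));
        r += 0.01;                        /* inject a large arithmetic error   */
        printf("injected error detected: %d\n", !rpc_check_mul(a, b, r));
        return 0;
    }

A fault-free result stays within the checker's rounding budget and passes, while an injected error well above that budget is flagged; errors smaller than the tolerance are exactly the undetectable cases the paper's analyses bound.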


Similar articles

A Family of Variable-Precision Interval Arithmetic Processors

Traditional computer systems often suffer from roundoff error and catastrophic cancellation in floating point computations. These systems produce apparently high precision results with little or no indication of the accuracy. This paper presents hardware designs, arithmetic algorithms, and software support for a family of variable-precision, interval arithmetic processors. These processors giv...
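For readers unfamiliar with the interval approach mentioned above, here is a minimal, hypothetical software sketch (not the processors described in that paper): each value is kept as an enclosing [lo, hi] pair with outward rounding, so the final interval is guaranteed to contain the true result and its width exposes the roundoff that a plain floating-point result hides. The type and function names are illustrative.

    #include <math.h>
    #include <stdio.h>

    /* Toy interval type: [lo, hi] encloses the exact real result.
       Outward rounding is approximated with nextafter(), which widens
       each bound by one ulp; a real interval library would switch the
       FPU rounding mode instead. */
    typedef struct { double lo, hi; } interval;

    static interval iv(double x) {
        /* widen by one ulp each way so the decimal constant the
           literal approximates is certainly enclosed */
        return (interval){ nextafter(x, -INFINITY), nextafter(x, INFINITY) };
    }
    static interval iv_add(interval a, interval b) {
        return (interval){ nextafter(a.lo + b.lo, -INFINITY),
                           nextafter(a.hi + b.hi,  INFINITY) };
    }
    static interval iv_sub(interval a, interval b) {
        return (interval){ nextafter(a.lo - b.hi, -INFINITY),
                           nextafter(a.hi - b.lo,  INFINITY) };
    }

    int main(void)
    {
        double point = (0.1 + 0.2) - 0.3;   /* plain result, not exactly 0 */
        interval enclosed = iv_sub(iv_add(iv(0.1), iv(0.2)), iv(0.3));
        printf("point result:    %.17g\n", point);
        printf("interval result: [%.17g, %.17g]\n", enclosed.lo, enclosed.hi);
        return 0;
    }

The point computation silently returns a small nonzero residue, whereas the interval result still contains the true value 0 and its width reports how much accuracy was lost.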


Design and Implementation of a High Precision Arithmetic with Rigorous Error Bounds

In this paper we present the design of a rigorous, high precision floating point arithmetic. The algorithms presented are implemented in FORTRAN and are made available through the COSY INFINITY rigorous computation package. The three design objectives for these high precision intervals are high speed, particularly for the elementary operations, absolutely rigorous treatment of roundoff erro...


An FPGA-Based Face Detector Using Neural Network and a Scalable Floating Point Unit

This study implemented an FPGA-based face detector using neural networks and a scalable floating point arithmetic unit (FPU). The FPU provides dynamic range and reduces the bit-width of the arithmetic unit more than a fixed-point approach does. These features reduce memory requirements, making the design efficient for neural network systems with large data widths. The arithmetic unit occupies 39~45% of ...


Stochastic Arithmetic in Multiprecision

Floating-point arithmetic precision is limited in length: the IEEE single (respectively double) precision format is 32 bits (respectively 64 bits) long. Extended precision formats can be up to 128 bits long. However, some problems require a longer floating-point format because of round-off errors. Such problems are usually solved in arbitrary precision, but round-off errors still occur and must be ...


CAMPARY: Cuda Multiple Precision Arithmetic Library and Applications

Many scientific computing applications demand massive numerical computations on parallel architectures such as Graphics Processing Units (GPUs). Usually, either floating-point single or double precision arithmetic is used. Higher precision is generally not available in hardware, and software extended precision libraries are much slower and rarely supported on GPUs. We develop CAMPARY: a multipl...



Journal:
  • CoRR

Volume: abs/1510.01145   Issue: -

Pages: -

Publication date: 2015