Obtaining Accurate Error Expressions and Bounds for Floating-Point Multiplicative Algorithms
Multiplicative Newton–Raphson and Goldschmidt algorithms are widely used in current processors to implement division, reciprocal, square root, and reciprocal square root operations. Starting from an initial approximation of a given accuracy, several iterations are performed until the required result accuracy is achieved; the number of iterations depends on the accuracy of the initial approximation and on the required accuracy, and each iteration consists of several multiplications. In this paper, we present an accurate error analysis that takes into account all the contributions to the final error and yields error bounds for each iteration. These error bounds can be used to obtain optimal unit designs by reducing the size of the multiplier and, therefore, the area requirements. To show the usefulness of the error analysis, we compare the optimal multiplier size obtained from it with the multipliers in the floating-point division and square root units of some popular processors; we conclude that the multiplier size, and hence its area, can be reduced by roughly 10%.
Keywords: floating-point arithmetic, reciprocal and division computation, reciprocal square root and square root computation, rounding, error estimation, error bounds
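As a brief illustration of the iterative refinement described in the abstract (a textbook sketch of the exact-arithmetic case, not the paper's full error model): for reciprocal computation, the Newton–Raphson iteration refines an approximation $x_i \approx 1/d$ via

$$x_{i+1} = x_i \, (2 - d \, x_i).$$

Writing the relative error as $\varepsilon_i = 1 - d \, x_i$, one step gives

$$1 - d \, x_{i+1} = 1 - d \, x_i (2 - d \, x_i) = (1 - d \, x_i)^2 = \varepsilon_i^2,$$

so the error squares at each exact iteration, which is why the number of iterations depends only on the accuracy of the initial approximation and on the target accuracy. In finite-precision hardware, each of the multiplications in an iteration introduces an additional rounding term, and bounding the accumulation of these terms is the subject of the error analysis summarized above.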