New expf, exp2f, logf, log2f and powf implementations

Based on code from https://github.com/ARM-software/optimized-routines/

This patch adds a highly optimized generic implementation of expf, exp2f,
logf, log2f and powf.  The new functions are not only faster (6x for powf!),
but are also smaller and more accurate.  To achieve this, the algorithm uses
double-precision arithmetic for accuracy, avoids divisions and uses small
table lookups to keep the polynomials short.  Special cases are handled
inline to avoid the overhead of wrapper functions and to set errno as POSIX
requires.

The new functions are added under newlib/libm/common, but the old
implementations are kept (in newlib/libm/math) for non-IEEE or pre-C99
systems.  Targets can enable the new math code by defining
__OBSOLETE_MATH_DEFAULT to 0 in newlib/libc/include/machine/ieeefp.h; users
can override the default by defining __OBSOLETE_MATH.  Currently the new
code is enabled for AArch64 and AArch32 with VFP.  Targets with a
single-precision FPU may still prefer the old implementation.

libm.a size changes (bytes):
  arm:                      -1692
  arm/thumb/v7-a/nofp:       -878
  arm/thumb/v7-a+fp/hard:    -864
  arm/thumb/v7-a+fp/softfp:  -908
  aarch64:                  -1476

Improve performance of sinf/cosf/sincosf

This patch is a complete rewrite of sinf, cosf and sincosf.  The new version
is significantly faster, as well as simpler and more accurate.  The
worst-case error is 0.56072 ULP and the maximum relative error is
0.5303 * 2^-23 over all 4 billion inputs; in non-nearest rounding modes the
error is 1 ULP.

The algorithm uses three main cases: small inputs which don't need argument
reduction, small inputs which need a simple range reduction, and large
inputs requiring complex range reduction.  The code uses approximate integer
comparisons to quickly decide between these cases; on some targets this may
be slow, so it can be configured to use floating-point comparisons instead.

The small range reducer uses a single reduction step to handle values up to
120.0.  It is fastest on targets which support inlined round instructions.

The large range reducer uses integer arithmetic for simplicity.  It does a
32x96-bit multiply to compute a 64-bit modulo result.  This is more than
accurate enough to handle the worst-case cancellation for values close to an
integer multiple of PI/4.  It could be optimized further, but it is already
much faster than necessary.

Simple benchmark showing the speedup factor on AArch64 for various input
ranges:

  range 0.7853982   sinf  1.7   cosf  2.2   sincosf  2.8
  range 1.570796    sinf  1.9   cosf  1.9   sincosf  2.7
  range 3.141593    sinf  2.0   cosf  2.0   sincosf  3.5
  range 6.283185    sinf  2.3   cosf  2.3   sincosf  4.2
  range 125.6637    sinf  2.9   cosf  3.0   sincosf  5.1
  range 1.1259e15   sinf 26.8   cosf 26.8   sincosf 45.2

ChangeLog:
2018-05-18  Wilco Dijkstra  <wdijkstr@arm.com>

        * newlib/libm/common/Makefile.in: Regenerated.
        * newlib/libm/common/Makefile.am: Add sinf.c, cosf.c, sincosf.c,
        sincosf.h, sincosf_data.c.  Add -fbuiltin -fno-math-errno to CFLAGS.
        * newlib/libm/common/math_config.h: Add HAVE_FAST_ROUND,
        HAVE_FAST_LROUND, roundtoint, converttoint, force_eval_float,
        force_eval_double, eval_as_float, eval_as_double, likely, unlikely.
        * newlib/libm/common/cosf.c: New file.
        * newlib/libm/common/sinf.c: Likewise.
        * newlib/libm/common/sincosf.h: Likewise.
        * newlib/libm/common/sincosf.c: Likewise.
        * newlib/libm/common/sincosf_data.c: Likewise.
        * newlib/libm/math/sf_cos.c: Add #if to build conditionally.
        * newlib/libm/math/sf_sin.c: Likewise.
        * newlib/libm/math/wf_sincos.c: Likewise.
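
As an illustration only (not the shipped code), a single Cody-Waite style
step of the kind the small range reducer performs looks like this, using the
standard high/low split of pi/2:

    /* Sketch: reduce x to r = x - n*pi/2 with n = round(x * 2/pi).  */
    double n = round (x * 0x1.45f306dc9c883p-1);    /* x * 2/pi      */
    double r = (x - n * 0x1.921fb54442d18p+0)       /* n * pi/2 high */
                  - n * 0x1.1a62633145c07p-54;      /* n * pi/2 low  */

The shipped sincosf.c differs in detail (it builds on the roundtoint and
converttoint helpers defined in math_config.h below), but this is the
structure of the single-step reduction.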

/* Configuration for math routines.
   Copyright (c) 2017-2018 Arm Ltd.  All rights reserved.

   SPDX-License-Identifier: BSD-3-Clause

   Redistribution and use in source and binary forms, with or without
   modification, are permitted provided that the following conditions
   are met:
   1. Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
   2. Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in the
      documentation and/or other materials provided with the distribution.
   3. The name of the company may not be used to endorse or promote
      products derived from this software without specific prior written
      permission.

   THIS SOFTWARE IS PROVIDED BY ARM LTD ``AS IS'' AND ANY EXPRESS OR IMPLIED
   WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
   MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
   IN NO EVENT SHALL ARM LTD BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
   TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */

#ifndef _MATH_CONFIG_H
#define _MATH_CONFIG_H

#include <math.h>
#include <stdint.h>

#ifndef WANT_ROUNDING
/* Correct special case results in non-nearest rounding modes.  */
# define WANT_ROUNDING 1
#endif

#ifdef _IEEE_LIBM
# define WANT_ERRNO 0
# define _LIB_VERSION _IEEE_
#else
/* Set errno according to ISO C with (math_errhandling & MATH_ERRNO) != 0.  */
# define WANT_ERRNO 1
# define _LIB_VERSION _POSIX_
#endif

#ifndef WANT_ERRNO_UFLOW
/* Set errno to ERANGE if result underflows to 0 (in all rounding modes).  */
# define WANT_ERRNO_UFLOW (WANT_ROUNDING && WANT_ERRNO)
#endif

#define _IEEE_  -1
#define _POSIX_  0
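
/* Example (illustrative, not a definition from this header): a routine can
   wrap its result as

     return check_uflow (y);

   where check_uflow, defined further down, only calls the errno-setting
   helper __math_check_uflow when WANT_ERRNO is non-zero, so _IEEE_LIBM
   builds pay no errno overhead.  */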

/* Compiler can inline round as a single instruction.  */
#ifndef HAVE_FAST_ROUND
# if __aarch64__
#  define HAVE_FAST_ROUND 1
# else
#  define HAVE_FAST_ROUND 0
# endif
#endif

/* Compiler can inline lround, but not (long)round(x).  */
#ifndef HAVE_FAST_LROUND
# if __aarch64__ && (100*__GNUC__ + __GNUC_MINOR__) >= 408 && __NO_MATH_ERRNO__
#  define HAVE_FAST_LROUND 1
# else
#  define HAVE_FAST_LROUND 0
# endif
#endif

/* Compiler can inline fma as a single instruction.  */
#ifndef HAVE_FAST_FMA
# if __aarch64__ || (__ARM_FEATURE_FMA && (__ARM_FP & 8))
#  define HAVE_FAST_FMA 1
# else
#  define HAVE_FAST_FMA 0
# endif
#endif

#if HAVE_FAST_ROUND
/* When set, the roundtoint and converttoint functions are provided with
   the semantics documented below.  */
# define TOINT_INTRINSICS 1

/* Round x to nearest int in all rounding modes, ties have to be rounded
   consistently with converttoint so the results match.  If the result
   would be outside of [-2^31, 2^31-1] then the semantics is unspecified.  */
static inline double_t
roundtoint (double_t x)
{
  return round (x);
}

/* Convert x to nearest int in all rounding modes, ties have to be rounded
   consistently with roundtoint.  If the result is not representable in an
   int32_t then the semantics is unspecified.  */
static inline int32_t
converttoint (double_t x)
{
# if HAVE_FAST_LROUND
  return lround (x);
# else
  return (long) round (x);
# endif
}
#endif
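
/* Illustrative use (a sketch, not code from this header): range reduction
   typically needs the rounded value both as a double and as an integer,
   rounded the same way:

     double z = x * inv_period;      // inv_period: hypothetical scale factor
     double kd = roundtoint (z);     // nearest integer, as a double
     int32_t k = converttoint (z);   // the same integer, as an int32_t
     double r = z - kd;              // reduced argument, |r| <= 0.5

   k then selects a table entry or quadrant while r feeds a polynomial,
   which is why the two functions must round ties consistently.  */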

#ifndef TOINT_INTRINSICS
# define TOINT_INTRINSICS 0
#endif

static inline uint32_t
asuint (float f)
{
  union
  {
    float f;
    uint32_t i;
  } u = {f};
  return u.i;
}

static inline float
asfloat (uint32_t i)
{
  union
  {
    uint32_t i;
    float f;
  } u = {i};
  return u.f;
}

static inline uint64_t
asuint64 (double f)
{
  union
  {
    double f;
    uint64_t i;
  } u = {f};
  return u.i;
}

static inline double
asdouble (uint64_t i)
{
  union
  {
    uint64_t i;
    double f;
  } u = {i};
  return u.f;
}
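
/* For example, special cases can be filtered with integer compares on the
   bit pattern instead of floating-point compares, a pattern used throughout
   these routines:

     uint32_t ix = asuint (x);
     if ((ix & 0x7fffffff) >= 0x7f800000)
       { }  // |x| is Inf or NaN

   and asfloat/asdouble turn a constructed bit pattern back into a value.  */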

#ifndef IEEE_754_2008_SNAN
# define IEEE_754_2008_SNAN 1
#endif

static inline int
issignalingf_inline (float x)
{
  uint32_t ix = asuint (x);
  if (!IEEE_754_2008_SNAN)
    return (ix & 0x7fc00000) == 0x7fc00000;
  return 2 * (ix ^ 0x00400000) > 0xFF800000u;
}

static inline int
issignaling_inline (double x)
{
  uint64_t ix = asuint64 (x);
  if (!IEEE_754_2008_SNAN)
    return (ix & 0x7ff8000000000000) == 0x7ff8000000000000;
  return 2 * (ix ^ 0x0008000000000000) > 2 * 0x7ff8000000000000ULL;
}
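
/* These matter where quiet and signaling NaNs need different treatment: for
   instance pow (1.0, qNaN) is 1.0 without raising any exception, while a
   signaling NaN argument must raise invalid, so special-case code does
   something like

     if (issignaling_inline (y))
       return __math_invalid (y);

   (an illustration of the intent, not the exact code).  */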

#if __aarch64__ && __GNUC__
/* Prevent the optimization of a floating-point expression.  */
static inline float
opt_barrier_float (float x)
{
  __asm__ __volatile__ ("" : "+w" (x));
  return x;
}
static inline double
opt_barrier_double (double x)
{
  __asm__ __volatile__ ("" : "+w" (x));
  return x;
}
/* Force the evaluation of a floating-point expression for its side-effect.  */
static inline void
force_eval_float (float x)
{
  __asm__ __volatile__ ("" : "+w" (x));
}
static inline void
force_eval_double (double x)
{
  __asm__ __volatile__ ("" : "+w" (x));
}
#else
static inline float
opt_barrier_float (float x)
{
  volatile float y = x;
  return y;
}
static inline double
opt_barrier_double (double x)
{
  volatile double y = x;
  return y;
}
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-variable"
static inline void
force_eval_float (float x)
{
  volatile float y = x;
}
static inline void
force_eval_double (double x)
{
  volatile double y = x;
}
#pragma GCC diagnostic pop
#endif
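
/* Illustrative use (a sketch, not code from this header): error paths use
   the barrier and force_eval to stop the compiler from constant-folding an
   expression that is evaluated only to raise floating-point exceptions:

     float tiny = opt_barrier_float (0x1p-126f);
     force_eval_float (tiny * tiny);   // raises underflow/inexact; result unused
 */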

/* Evaluate an expression as the specified type, normally a type
   cast should be enough, but compilers implement non-standard
   excess-precision handling, so when FLT_EVAL_METHOD != 0 then
   these functions may need to be customized.  */
static inline float
eval_as_float (float x)
{
  return x;
}
static inline double
eval_as_double (double x)
{
  return x;
}
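
/* For example, on a target where FLT_EVAL_METHOD != 0 keeps float
   arithmetic in a wider register, writing

     float y = eval_as_float (a + b);

   documents that y must be the correctly rounded float result, and
   eval_as_float can then be customized (e.g. to store through a volatile)
   instead of relying on the implicit conversion alone.  */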

#ifdef __GNUC__
# define NOINLINE __attribute__ ((noinline))
# define likely(x) __builtin_expect (!!(x), 1)
# define unlikely(x) __builtin_expect (x, 0)
#else
# define NOINLINE
# define likely(x) (x)
# define unlikely(x) (x)
#endif

/* gcc emitting PE/COFF doesn't support visibility.  */
#if defined (__GNUC__) && !defined (__CYGWIN__)
# define HIDDEN __attribute__ ((__visibility__ ("hidden")))
#else
# define HIDDEN
#endif

/* Error handling tail calls for special cases, with a sign argument.
   The sign of the return value is set if the argument is non-zero.  */

/* The result overflows.  */
HIDDEN float __math_oflowf (uint32_t);
/* The result underflows to 0 in nearest rounding mode.  */
HIDDEN float __math_uflowf (uint32_t);
/* The result underflows to 0 in some directed rounding mode only.  */
HIDDEN float __math_may_uflowf (uint32_t);
/* Division by zero.  */
HIDDEN float __math_divzerof (uint32_t);
/* The result overflows.  */
HIDDEN double __math_oflow (uint32_t);
/* The result underflows to 0 in nearest rounding mode.  */
HIDDEN double __math_uflow (uint32_t);
/* The result underflows to 0 in some directed rounding mode only.  */
HIDDEN double __math_may_uflow (uint32_t);
/* Division by zero.  */
HIDDEN double __math_divzero (uint32_t);

/* Error handling using input checking.  */

/* Invalid input unless it is a quiet NaN.  */
HIDDEN float __math_invalidf (float);
/* Invalid input unless it is a quiet NaN.  */
HIDDEN double __math_invalid (double);

/* Error handling using output checking, only for errno setting.  */

/* Check if the result overflowed to infinity.  */
HIDDEN double __math_check_oflow (double);
/* Check if the result underflowed to 0.  */
HIDDEN double __math_check_uflow (double);
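
/* Illustrative use (a sketch, not code from this header): a routine
   normally tail-calls one of these helpers from its special-case path:

     if (unlikely (overflowed))            // hypothetical condition
       return __math_oflowf (sign_bias);   // sets errno to ERANGE, returns
                                           // an overflowed (infinite) result

   where a non-zero argument selects the negative result, per the sign
   convention above.  */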

/* Check if the result overflowed to infinity.  */
static inline double
check_oflow (double x)
{
  return WANT_ERRNO ? __math_check_oflow (x) : x;
}

/* Check if the result underflowed to 0.  */
static inline double
check_uflow (double x)
{
  return WANT_ERRNO ? __math_check_uflow (x) : x;
}

/* Shared between expf, exp2f and powf.  */
#define EXP2F_TABLE_BITS 5
#define EXP2F_POLY_ORDER 3
extern const struct exp2f_data
{
  uint64_t tab[1 << EXP2F_TABLE_BITS];
  double shift_scaled;
  double poly[EXP2F_POLY_ORDER];
  double shift;
  double invln2_scaled;
  double poly_scaled[EXP2F_POLY_ORDER];
} __exp2f_data HIDDEN;
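
/* In outline (a simplified sketch, not the shipped code; see exp2f.c):
   2^x is split as 2^(k/32) * 2^r with k = round(32*x) and |r| <= 1/64.
   tab[] holds pre-biased bit patterns of 2^(i/32) so the scale can be
   rebuilt with integer arithmetic, and a short polynomial handles 2^r:

     double kd = roundtoint (32.0 * x);
     uint64_t ki = converttoint (32.0 * x);
     double r = x - kd / 32.0;
     double s = asdouble (__exp2f_data.tab[ki % 32]
                          + (ki << (52 - EXP2F_TABLE_BITS)));
     // result ~= s * poly(r), narrowed back to float

   The shift and the *_scaled fields serve the expf and powf variants of the
   same scheme.  */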

#define LOGF_TABLE_BITS 4
#define LOGF_POLY_ORDER 4
extern const struct logf_data
{
  struct
  {
    double invc, logc;
  } tab[1 << LOGF_TABLE_BITS];
  double ln2;
  double poly[LOGF_POLY_ORDER - 1]; /* First order coefficient is 1.  */
} __logf_data HIDDEN;
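
/* This and the following log-style tables all follow the same pattern (a
   sketch, not the shipped code; see logf.c): the top bits of x select an
   entry whose invc is close to the reciprocal of x's significand and whose
   logc is the precomputed log of that reference point, so

     log(x) ~= k*ln2 + logc + poly(r),  with r = significand(x)*invc - 1

   and |r| small enough that a short polynomial covers the log1p term.  */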

#define LOG2F_TABLE_BITS 4
#define LOG2F_POLY_ORDER 4
extern const struct log2f_data
{
  struct
  {
    double invc, logc;
  } tab[1 << LOG2F_TABLE_BITS];
  double poly[LOG2F_POLY_ORDER];
} __log2f_data HIDDEN;

#define POWF_LOG2_TABLE_BITS 4
#define POWF_LOG2_POLY_ORDER 5
#if TOINT_INTRINSICS
# define POWF_SCALE_BITS EXP2F_TABLE_BITS
#else
# define POWF_SCALE_BITS 0
#endif
#define POWF_SCALE ((double) (1 << POWF_SCALE_BITS))
extern const struct powf_log2_data
{
  struct
  {
    double invc, logc;
  } tab[1 << POWF_LOG2_TABLE_BITS];
  double poly[POWF_LOG2_POLY_ORDER];
} __powf_log2_data HIDDEN;
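
/* powf is computed as exp2 (y * log2 (x)) in double precision: the table
   above supplies log2 (x) and the exp2f table supplies the final
   exponentiation.  When TOINT_INTRINSICS is set, the log2 result is kept
   scaled by POWF_SCALE (2^EXP2F_TABLE_BITS) so that converttoint on the
   product can produce the exp2 table index directly.  (A summary of the
   scheme; see powf.c for the details.)  */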

#define EXP_TABLE_BITS 7
#define EXP_POLY_ORDER 5
/* Use polynomial that is optimized for a wider input range.  This may be
   needed for good precision in non-nearest rounding and !TOINT_INTRINSICS.  */
#define EXP_POLY_WIDE 0
/* Use close to nearest rounding toint when !TOINT_INTRINSICS.  This may be
   needed for good precision in non-nearest rounding and !EXP_POLY_WIDE.  */
#define EXP_USE_TOINT_NARROW 0
#define EXP2_POLY_ORDER 5
#define EXP2_POLY_WIDE 0
extern const struct exp_data
{
  double invln2N;
  double shift;
  double negln2hiN;
  double negln2loN;
  double poly[4]; /* Last four coefficients.  */
  double exp2_shift;
  double exp2_poly[EXP2_POLY_ORDER];
  uint64_t tab[2*(1 << EXP_TABLE_BITS)];
} __exp_data HIDDEN;

#define LOG_TABLE_BITS 7
#define LOG_POLY_ORDER 6
#define LOG_POLY1_ORDER 12
extern const struct log_data
{
  double ln2hi;
  double ln2lo;
  double poly[LOG_POLY_ORDER - 1]; /* First coefficient is 1.  */
  double poly1[LOG_POLY1_ORDER - 1];
  struct {double invc, logc;} tab[1 << LOG_TABLE_BITS];
#if !HAVE_FAST_FMA
  struct {double chi, clo;} tab2[1 << LOG_TABLE_BITS];
#endif
} __log_data HIDDEN;

#define LOG2_TABLE_BITS 6
#define LOG2_POLY_ORDER 7
#define LOG2_POLY1_ORDER 11
extern const struct log2_data
{
  double invln2hi;
  double invln2lo;
  double poly[LOG2_POLY_ORDER - 1];
  double poly1[LOG2_POLY1_ORDER - 1];
  struct {double invc, logc;} tab[1 << LOG2_TABLE_BITS];
#if !HAVE_FAST_FMA
  struct {double chi, clo;} tab2[1 << LOG2_TABLE_BITS];
#endif
} __log2_data HIDDEN;

#define POW_LOG_TABLE_BITS 7
#define POW_LOG_POLY_ORDER 8
extern const struct pow_log_data
{
  double ln2hi;
  double ln2lo;
  double poly[POW_LOG_POLY_ORDER - 1]; /* First coefficient is 1.  */
  /* Note: the pad field is unused, but allows slightly faster indexing.  */
  struct {double invc, pad, logc, logctail;} tab[1 << POW_LOG_TABLE_BITS];
} __pow_log_data HIDDEN;

#endif