The !HAVE_FAST_FMA code path split r = z/c - 1 into r = rhi + rlo such
that when z = 1-tiny and c = 1, rlo and rhi could have much larger
magnitude than r, which later caused large rounding errors. So round
to nearest instead of truncating at the split.
In newlib with default settings this was observable on some arm targets
that enable the new math code but have no fma.
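A minimal sketch of the shape of the fix (helper and variable names
are illustrative, not the exact newlib code): adding 1<<31 before
masking rounds the split to nearest, where plain masking would
truncate.

  #include <stdint.h>
  #include <string.h>

  static inline uint64_t asuint64 (double x)
  { uint64_t u; memcpy (&u, &x, sizeof u); return u; }

  static inline double asdouble (uint64_t u)
  { double x; memcpy (&x, &u, sizeof x); return x; }

  /* r = z/c - 1 via a head/tail split; invc approximates 1/c with
     enough trailing zero bits that zhi*invc is exact.  */
  static double
  split_div_m1 (double z, double invc)
  {
    uint64_t iz = asuint64 (z);
    /* round to nearest at the split instead of truncating */
    double zhi = asdouble ((iz + (1ULL << 31)) & (-1ULL << 32));
    double zlo = z - zhi;
    double rhi = zhi * invc - 1.0;
    double rlo = zlo * invc;
    return rhi + rlo;
  }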
The roundtoint and converttoint internal functions are only called with small
values, so a 32-bit result is enough for converttoint, and since it is a
signed int conversion the natural return type is int32_t.
The original idea was to help the compiler keep the result in uint64_t:
then it is clear that no sign extension is needed and there is no accidental
undefined or implementation-defined signed int arithmetic.
But it turns out gcc does a good job with inlining, so changing the type has
no overhead, and the semantics of the conversion are less surprising this way.
Since we want to allow the asuint64 (x + 0x1.8p52) style conversion, the top
bits were never usable and the existing code ensures that only the bottom
32 bits of the conversion result are used.
In newlib with default settings only aarch64 is affected and there is no
significant code generation change with gcc after the patch.
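For illustration, a sketch of that conversion style under
round-to-nearest mode (helper names are assumptions; only valid for
small |x|):

  #include <stdint.h>
  #include <string.h>

  static inline uint64_t asuint64 (double x)
  { uint64_t u; memcpy (&u, &x, sizeof u); return u; }

  /* Adding 0x1.8p52 rounds x to an integer and leaves it in the low
     mantissa bits, so only the bottom 32 bits of the bit pattern are
     meaningful; int32_t makes the signed semantics explicit.  */
  static inline int32_t converttoint (double x)
  {
    return (int32_t) asuint64 (x + 0x1.8p52);
  }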
Synchronize code style and comments with Arm Optimized Routines; there
are no code changes in this patch. This ensures different projects using
the same code have consistent code style so bug fix patches can be applied
more easily.
This fix is for platforms which do not have writev().
* perror.c: Use _write_r() instead of writev().
* psignal.c: Use write() instead of writev().
Revise commit: d4f4e7ae1b
The new implementation is provided under !__OBSOLETE_MATH, it uses
ISO C99 code. With default settings the worst case error in nearest
rounding mode is 0.54 ULP with inlined fma and fma contraction. It uses
a 4 KB lookup table in addition to the table in exp_data.c; on aarch64
the .text+.rodata size of libm.a is increased by 2295 bytes.
Improvements on Cortex-A72:
latency: 3.3x
throughput: 4.9x
The new implementation is provided under !__OBSOLETE_MATH, it uses
ISO C99 code. With default settings the worst case error in nearest
rounding mode is 0.547 ULP with inlined fma and fma contraction. It uses
a 1 KB lookup table; on aarch64 the .text+.rodata size of libm.a is
increased by 1584 bytes.
Note that the math.h header defines log2(x) to be log(x)/Ln2; this is
not changed, so the new code is only used if that macro is suppressed.
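For example (a sketch of the caller-side consequence):

  #include <math.h>
  #undef log2   /* suppress the log(x)/Ln2 macro from math.h */

  double
  use_log2 (double x)
  {
    return log2 (x);   /* now reaches the new implementation */
  }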
Improvements on Cortex-A72:
latency: 2.0x
throughput: 2.2x
The new implementation is provided under !__OBSOLETE_MATH, it uses
ISO C99 code. With default settings the worst case error in nearest
rounding mode is 0.519 ULP with inlined fma and fma contraction. It uses
a 2 KB lookup table; on aarch64 the .text+.rodata size of libm.a is increased
by 1703 bytes. The w_log.c wrapper is disabled since error handling is
inline in the new code.
New __HAVE_FAST_FMA and __HAVE_FAST_FMA_DEFAULT feature macros were
added to enable selecting between the code path that uses fma and the
one that does not. Targets are supposed to set __HAVE_FAST_FMA_DEFAULT
if they have single-instruction fma and the compiler can actually
inline it (gcc has the __FP_FAST_FMA macro but that does not guarantee
inlining with -fno-builtin-fma).
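A hypothetical example of how a target might wire this up (the
condition is an illustration, not the actual newlib logic):

  #if defined (__aarch64__)   /* single-instruction, inlinable fma */
  #define __HAVE_FAST_FMA_DEFAULT 1
  #else
  #define __HAVE_FAST_FMA_DEFAULT 0
  #endif

  #ifndef __HAVE_FAST_FMA
  #define __HAVE_FAST_FMA __HAVE_FAST_FMA_DEFAULT
  #endif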
Improvements on Cortex-A72:
latency: 1.9x
throughput: 2.3x
The new implementations are provided under !__OBSOLETE_MATH, they use
ISO C99 code. There are several settings, with the default one the
worst case error in nearest rounding mode is 0.509 ULP for exp and
0.507 ULP for exp2 when a multiply and add is contracted into an fma.
They use a shared 2 KB lookup table; on aarch64 the .text+.rodata size
of libm.a is increased by 1868 bytes. The w_*.c wrappers are disabled
for the new code as it takes care of error handling inline.
The old exp2(x) code used to be just pow(2,x) so the speedup there
is more significant.
The file name has no special prefix to avoid any name collision with
existing files.
Improvements on Cortex-A72:
exp latency: 3.2x
exp throughput: 4.1x
exp2 latency: 7.8x
exp2 throughput: 18.8x
This change is equivalent to the commit
c65db17340
and only affects code that is from the Arm optimized-routines project.
It does not affect the observable behaviour, but the code generation
can be different on 64-bit targets. The intention is to make the
portable semantics of the code obvious by using a fixed-size type.
* (mkcategories): Fix a bug that outputs an incorrect Unicode category
table for code point ranges.
* (categories.t): Rebuild it using the bug-fixed mkcategories.
This fixes the problem reported in the following post.
https://cygwin.com/ml/cygwin/2018-06/msg00248.html
This patch is a complete rewrite of sinf, cosf and sincosf. The new version
is significantly faster, as well as simple and accurate.
The worst-case ULP is 0.56072, maximum relative error is 0.5303p-23 over all
4 billion inputs. In non-nearest rounding modes the error is 1 ULP.
The algorithm uses 3 main cases: small inputs which don't need argument
reduction, small inputs which need a simple range reduction and large inputs
requiring complex range reduction. The code uses approximate integer
comparisons to quickly decide between these cases - on some targets this may
be slow, so this can be configured to use floating point comparisons.
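A sketch of the approximate integer comparison idea (helper names are
assumptions, not the actual sincosf.c code):

  #include <stdint.h>
  #include <string.h>

  static inline uint32_t asuint (float f)
  { uint32_t u; memcpy (&u, &f, sizeof u); return u; }

  /* The top 12 bits of a float hold the sign, exponent and leading
     mantissa bits; masking off the sign leaves a coarse magnitude, so
     e.g. abstop12 (x) < abstop12 (120.0f) roughly tests |x| < 120.  */
  static inline uint32_t abstop12 (float f)
  {
    return (asuint (f) >> 20) & 0x7ff;
  }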
The small range reducer uses a single reduction step to handle values up to
120.0. It is fastest on targets which support inlined round instructions.
The large range reducer uses integer arithmetic for simplicity. It does a
32x96 bit multiply to compute a 64-bit modulo result. This is more than
accurate enough to handle the worst-case cancellation for values close to
an integer multiple of PI/4. It could be further optimized, however it is
already much faster than necessary.
Simple benchmark showing speedup factor on AArch64 for various ranges:
range 0.7853982 sinf 1.7 cosf 2.2 sincosf 2.8
range 1.570796 sinf 1.9 cosf 1.9 sincosf 2.7
range 3.141593 sinf 2.0 cosf 2.0 sincosf 3.5
range 6.283185 sinf 2.3 cosf 2.3 sincosf 4.2
range 125.6637 sinf 2.9 cosf 3.0 sincosf 5.1
range 1.1259e15 sinf 26.8 cosf 26.8 sincosf 45.2
ChangeLog:
2018-05-18 Wilco Dijkstra <wdijkstr@arm.com>
* newlib/libm/common/Makefile.in: Regenerated.
* newlib/libm/common/Makefile.am: Add sinf.c, cosf.c, sincosf.c,
sincosf.h, sincosf_data.c. Add -fbuiltin -fno-math-errno to CFLAGS.
* newlib/libm/common/math_config.h: Add HAVE_FAST_ROUND, HAVE_FAST_LROUND,
roundtoint, converttoint, force_eval_float, force_eval_double, eval_as_float,
eval_as_double, likely, unlikely.
* newlib/libm/common/cosf.c: New file.
* newlib/libm/common/sinf.c: Likewise.
* newlib/libm/common/sincosf.h: Likewise.
* newlib/libm/common/sincosf.c: Likewise.
* newlib/libm/common/sincosf_data.c: Likewise.
* newlib/libm/math/sf_cos.c: Add #if to build conditionally.
* newlib/libm/math/sf_sin.c: Likewise.
* newlib/libm/math/wf_sincos.c: Likewise.
Previously, "test 1 2 3 -a -b -c" was permuted to "test -a -b -c 1 2 3",
but "test 1 2 3 -abc" was left as "test 1 2 3 -abc".
Signed-off-by: Thomas Kindler <mail+newlib@t-kindler.de>
- when calculating a correction to align next brk to page boundary,
ensure that the correction is less than a page size
- if allocating the correction fails, ensure that the top size is
set to brk + sbrk_size (minus any front alignment made)
Signed-off-by: Jeff Johnston <jjohnstn@redhat.com>
When converting the number of days since the epoch (32 bits) to seconds,
calculations using 32-bit `long` overflow for years beyond 2038. Solve
this by casting the number of days to `time_t` just before the final
multiplication.
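A sketch of the fix (86400 seconds per day; the function name is
illustrative):

  #include <time.h>

  time_t
  days_to_seconds (long days)
  {
    /* promote to time_t before multiplying; with a 32-bit long the
       product overflows for dates past 2038 */
    return (time_t) days * 86400;
  }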
Signed-off-by: Freddie Chopin <freddie.chopin@gmail.com>
- From: Cesar Philippidis <cesar@codesourcery.com>
Date: Tue, 10 Apr 2018 14:43:42 -0700
Subject: [PATCH] nvptx port
This port adds support for Nvidia GPUs, which are primarily used as
offload accelerators in OpenACC and OpenMP.
The gdtoa implementation uses the type long, defined as Long, in lots
of code. For historical reasons newlib defines Long as int32_t instead.
This works fine as long as floating point exceptions are not enabled.
The conversion to a 32-bit int can lead to an FE_INVALID situation.
Example:
const char *str = "121645100408832000.0";
char *ptr;
feenableexcept (FE_INVALID);
strtod (str, &ptr);
This leads to the following situation in strtod:
double aadj;
Long L;
[...]
L = (Long)aadj;
For instance, on x86_64 the code here is
cvttsd2si %xmm0,%eax
At this point, aadj is 2529648000.0 in our example. The conversion to
the 32-bit %eax results in a negative int value, thus the conversion is
invalid. With feenableexcept (FE_INVALID), a SIGFPE is raised.
Fix this by always using 64-bit ints here if double is not a 32-bit
type, to avoid this kind of FP exception.
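A sketch of the idea (the guard follows newlib's _DOUBLE_IS_32BITS
convention; the real change is in the gdtoa headers):

  #include <stdint.h>

  #ifdef _DOUBLE_IS_32BITS
  typedef int32_t Long;   /* historical 32-bit Long is fine here */
  #else
  typedef int64_t Long;   /* wide enough that (Long) aadj cannot
                             raise FE_INVALID for the values seen */
  #endif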
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
Classical function call recursion wastes a lot of stack space.
Each recursion level requires a full stack frame comprising all
local variables and additional space as dictated by the
processor calling convention.
This implementation instead stores the variables that are unique
for each recursion level in a parameter stack array, and uses
iteration to emulate recursion. Function call recursion is not
used until the array is full.
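A minimal sketch of the scheme (specialized to int arrays for brevity;
the real qsort.c is generic and partitions differently):

  #define QSORT_FRAMES 8          /* iterative levels per real call */

  struct frame { int *lo, *hi; }; /* inclusive bounds of a subarray */

  static int *
  partition (int *lo, int *hi)    /* simple Lomuto partition */
  {
    int pivot = *hi, tmp, *store = lo;
    for (int *p = lo; p < hi; p++)
      if (*p < pivot)
        { tmp = *p; *p = *store; *store = tmp; store++; }
    tmp = *store; *store = *hi; *hi = tmp;
    return store;
  }

  /* sort_range (a, a + n - 1) sorts the n-element int array a */
  static void
  sort_range (int *lo, int *hi)
  {
    struct frame stack[QSORT_FRAMES];
    int top = 0;
    if (lo >= hi)
      return;
    stack[top++] = (struct frame) { lo, hi };
    while (top > 0)
      {
        struct frame f = stack[--top];
        int *mid = partition (f.lo, f.hi);
        if (top + 2 <= QSORT_FRAMES)
          {
            /* keep per-level state in the parameter stack array */
            if (f.lo < mid)
              stack[top++] = (struct frame) { f.lo, mid - 1 };
            if (mid + 1 < f.hi)
              stack[top++] = (struct frame) { mid + 1, f.hi };
          }
        else
          {
            /* array full: fall back to function call recursion */
            if (f.lo < mid)
              sort_range (f.lo, mid - 1);
            if (mid + 1 < f.hi)
              sort_range (mid + 1, f.hi);
          }
      }
  }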
To ensure the stack consumption isn't worsened by this design, the
size of the parameter stack array is chosen to be similar to the
stack frame excluding the array. Each function call recursion level
can handle 8 iterative recursion levels.
Stack consumption will worsen when sorting tiny arrays that do not
need recursion (6 elements or fewer). It will be about equal for
up to 15 elements, and be an improvement for larger arrays. The best
case improvement is a stack size reduction down to about one quarter
of the stack consumption before the change.
A design where the parameter stack array is large enough for the
worst case recursion level was rejected because it would worsen
the stack consumption when sorting arrays smaller than about 1500
elements. The worst case is 31 levels on a 32-bit system.
A design with a dynamic parameter array size was rejected because
of limitations in some compilers.
The qsort algorithm splits the input array in three parts. The
left and right parts may need further sorting. One of them is
sorted by recursion, the other by iteration. This update ensures
that it is the smaller part that is chosen for recursion.
By choosing the smaller part, each recursion level will handle
less than half the array of the previous recursion level. Hence
the recursion depth is bounded to be less than log2(n) i.e. 1
level per significant bit in the array size n.
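A sketch of the rule, reusing the partition helper from the previous
sketch:

  static int *partition (int *lo, int *hi);   /* as sketched above */

  static void
  sort_smaller_first (int *lo, int *hi)
  {
    while (lo < hi)
      {
        int *mid = partition (lo, hi);
        if (mid - lo < hi - mid)
          {
            if (lo < mid)
              sort_smaller_first (lo, mid - 1);  /* smaller: recurse */
            lo = mid + 1;                        /* larger: iterate  */
          }
        else
          {
            if (mid < hi)
              sort_smaller_first (mid + 1, hi);  /* smaller: recurse */
            hi = mid - 1;                        /* larger: iterate  */
          }
      }
  }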
The update also includes code comments explaining the algorithm.
Newlib has a build configuration where syscalls can be directly
embedded in the newlib library rather than relying on libgloss.
This configuration was broken recently by an update to the libgloss
support for Arm that was not propagated to the syscalls interface in
newlib itself. This patch restores the build. It's essentially a
copy of https://sourceware.org/ml/newlib/2018/msg00128.html but there
are some other minor cleanups and changes that I've made at the same
time. None of those cleanups affect functionality.
The prototypes of the following functions have been updated: _link,
_sbrk, _getpid, _write, _swiwrite, _lseek, _swilseek, _read and
_swiread.
Signed-off-by: Richard Earnshaw <Richard.Earnshaw@arm.com>
E.g. the Arm ABI requires -fshort-enums for bare-metal toolchains.
Given there are only 29 category enum values, the compiler chooses an
8-bit enum type, so a size of 11 bits for the bitfield leads to
a compile-time error:
error: width of 'cat' exceeds its type
enum category cat: 11;
^~~
Fix this by aligning the size of the category members to byte
boundaries.
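A sketch of the shape of the fix (the struct and field names are
illustrative):

  enum category { CAT_Lu, CAT_Ll, CAT_Nd /* ... 29 values ... */ };

  struct category_range
  {
    enum category cat: 8;   /* was 11: wider than the 8-bit enum
                               type chosen under -fshort-enums */
    unsigned int first: 24;
  };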
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
Scripts do not try to acquire Unicode data by best-effort magic anymore.
Options supported:
-h for help
-i to copy Unicode data from /usr/share/unicode/ucd first
-u to download Unicode data from unicode.org first
If the necessary Unicode files are not available locally (despite -i
or -u, if given), table generation is skipped, but no error code is
returned, so as not to obstruct the build process if called from a
Makefile.
touupper and toulower didn't return a value in all cases. Worse,
this only broke Cygwin when building without optimization for debug
purposes.
Why GCC neglects to notice this is a mystery.
While at it, fix formatting.
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
The tow* functions use an included case conversion table which can be
generated from Unicode data.
The isw* functions use a character categories table (provided by
categories.c) which can be generated from Unicode data.
The delegation between the current-locale and the specific-locale-dependent
functions was reversed, towards the generic locale-dependent functions
(*_l.c); this is however only relevant on systems with non-Unicode wide
character locales, thus not on Cygwin.
Table categories.t and tag enumeration categories.cat provide
character class data for most of the isw* functions.
These data are generated from Unicode data.
Linux and FreeBSD use int as well. In addition, this fixes an Ada
incompatibility problem on 64-bit targets. See also GCC:
gcc/ada/libgnarl/s-osinte__rtems.ads
Signed-off-by: Sebastian Huber <sebastian.huber@embedded-brains.de>
Locale modifier @cjkwide makes Unicode "ambiguous width" characters
wide. So ambiguous width characters can be forced to have width 2
even in non-CJK locales. This gives e.g. users of "Powerline symbols"
the opportunity to adjust their width to the desired behaviour (and the
behaviour apparently expected by some tools) without having to set a CJK
locale and without losing consistency of terminal character width with
wcwidth/wcswidth locale width.
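A hypothetical usage example (U+2592 MEDIUM SHADE has East Asian
Width "ambiguous"; the exact locale string is an assumption):

  #include <locale.h>
  #include <wchar.h>
  #include <stdio.h>

  int
  main (void)
  {
    setlocale (LC_CTYPE, "C.UTF-8@cjkwide");
    printf ("%d\n", wcwidth (L'\x2592'));   /* 2 with @cjkwide */
    return 0;
  }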
At least with Binutils 2.30 and GCC 7.3 we need symbol definitions
without the leading underscore.
Signed-off-by: Sebastian Huber <sebastian.huber@embedded-brains.de>
This is a NetBSD-specific detail which does not apply to Newlib, causing
linking issues in certain scenarios:
https://cygwin.com/ml/cygwin/2018-01/msg00189.html
Signed-off-by: Yaakov Selkowitz <yselkowi@redhat.com>
New optimized powf, logf, log2f, expf and exp2f yield worse performance
on Arm targets with only single precision instructions because the
double precision arithmetic is then implemented via softfloat routines.
This patch uses the old implementation when double precision
instructions are not available on Arm targets.
Testing: Built newlib with GCC's rmprofile Arm multilibs and compared
before/after: only the above functions and calls to them are changed
(name change from logf to __ieee754_logf and similar). Testing the
changed functions on a panel of values yields the same results as
before the original patches that improved them. Double-checking the
performance by looping over the same panel of values on an Arm
Cortex-M4 confirms the performance regression is fixed.
This patch fixes a syntax error in exit.c that was introduced during the
ANSI-fication of newlib. The patch fixes a compile-time issue that arises when
newlib is configured with the --enable-lite-exit feature.
The code path for _MB_CAPABLE scans for the '%' character and advances
the 'fmt' pointer past '%'. The code path for !_MB_CAPABLE left fmt
pointing at '%', which caused the state machine to go from the START
state to the DONE state immediately.
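A simplified sketch of the fix (names are illustrative; the real
change is in the format scanner described above):

  static const char *
  find_conversion (const char *fmt)
  {
    while (*fmt != '\0' && *fmt != '%')
      fmt++;   /* skip literal characters */
    if (*fmt == '%')
      fmt++;   /* previously missing in the !_MB_CAPABLE path */
    return fmt;
  }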
Neither upstream FreeBSD nor glibc ever call fflush from ftell
and friends. In border cases it has the tendency to return
wrong or unexpected values, for instance on block devices.
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
Make prototype of _kill() always visible when _COMPILING_NEWLIB is
defined. This makes <sys/signal.h> consistent with the use of
_COMPILING_NEWLIB in <sys/unistd.h>, <sys/times.h>, etc.
Updated patch to use 0.0f in addition to calling rintf.
Tested the same way as before, with a testcase that triggers the code
and with make check.
OK?
newlib/
* libm/math/wf_pow.c (powf): Call rintf instead of rint. Use 0.0f
for compare.
To update my email address to my current employer. Specifix died quite a while
ago, and I've had two jobs in the interim.
newlib/
* MAINTAINERS: Update my email address.
To follow up the thread starting at [1], since all uses of TRAD_SYNOPSIS
have been removed, and all uses of ANSI_SYNOPSIS have been renamed to
SYNOPSIS, we can now warn about the use of these.
[1] https://sourceware.org/ml/newlib/2017/msg01182.html
Signed-off-by: Jon Turney <jon.turney@dronecode.org.uk>
Discard QUICKREF sections, rather than writing them to stderr
Discard MATHREF sections, rather than discarding as an error
Pass NOTES sections through to texinfo, rather than discarding as an error
Don't redirect makedoc stderr to .ref file
Remove makedoc output on error
Remove .ref files from CLEANFILES
Regenerate Makefile.ins
Signed-off-by: Jon Turney <jon.turney@dronecode.org.uk>
Old BSD bug: While ^ is recognized and the set of matching characters
is negated, the code neglects to increment the pointer pointing to the
matching characters. Thus, on a negation expression like %[^xyz], the
matching doesn't only stop at x, y, or z, but incorrectly also on ^.
Fix this by setting the start pointer after recognizing the ^.
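For illustration, a small test of the behaviour described above:

  #include <stdio.h>

  int
  main (void)
  {
    char buf[32];
    sscanf ("ab^cdx", "%[^xyz]", buf);
    /* fixed: prints "ab^cd"; with the bug, matching stopped at '^' */
    printf ("%s\n", buf);
    return 0;
  }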
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
The following functions are also guarded in glibc:
fwprintf, swprintf, wprintf, vfwprintf, vswprintf, vwprintf.
Signed-off-by: Yaakov Selkowitz <yselkowi@redhat.com>