Let automake manage whether the objects are included in lib.a. This
fixes failures after commit 71086e8b2d
("newlib: delete (most) redundant lib_a_CCASFLAGS=$(AM_CCASFLAGS)") due
to automake generating a different set of implicit rules while the code
in here assumed the names of the generated objects.
This isn't strictly necessary, but it makes for much clearer logs as
to what the target is doing, provides cache vars for anyone who wants
to force the test a different way, and lets the build cache its own
results when rerunning config.status.
Restore the call to AC_NO_EXECUTABLES -- I naively assumed in commit
2e9aa5f56c ("newlib: update preprocessor
configure checks") that checking for a preprocessor would not involve
linking code. Unfortunately, autoconf will implicitly check that the
compiler "works" before allowing it to be used, and that involves a
link test, and that fails because newlib provides the C library which
is needed to pass a link test.
There is some code in NEWLIB_CONFIGURE specifically to help mitigate
these, but it's not kicking in here for some reason, so let's just add
the AC_NO_EXECUTABLES call back until we can unwind that custom logic.
Additionally, we have to call AC_PROG_CPP explicitly. This was being
invoked later on, but only in the use_libtool=yes codepath, and that
is almost never enabled.
When we had configure scripts in subdirs, the newlib_basedir value
was computed relative to that, and it'd be the same when used in the
Makefile in the same dir. With many subdir configure scripts removed,
the top-level configure & Makefile can't use the same relative path.
So switch the subdir Makefiles over to abs_newlib_basedir when they
use -I to find source headers.
Do this for all subdirs, even ones with configure scripts and where
newlib_basedir works. This makes the code consistent, and avoids
surprises if the configure script is ever removed in the future as
part of merging to the higher level.
Some of the subdirs were using -I$(newlib_basedir)/../newlib/ for
some reason. Collapse those too since newlib_basedir points to the
newlib source tree already.
When using the top-level configure script but subdir Makefiles, the
newlib_basedir value gets a bit out of sync: it's relative to where
configure lives, not where the Makefile lives. Move the abs setting
from the top-level configure script into acinclude.m4 so we can rely
on it being available everywhere. This commit doesn't use it anywhere
yet; it just lays the groundwork.
The work to merge libc/machine/ up a dir lost the stub doc targets.
So when libc/ recursed into machine/, it would stop going deeper as
the doc rules were empty. But now that libc/ goes directly into the
libc/machine/$arch/ and those have never had doc stubs, the build
fails. Add a quick hack to the top dir to ignore all machine/$arch/
dirs when generating docs. A follow up series will delete all of
this code as it merges all the doc rules into the top newlib dir.
The machine configure scripts are all effectively stub scripts that
pass the higher level options to their own makefiles. There were only
three doing custom tests. The rest were all effectively the same as
the libc/ configure script.
So instead of recursively running configure in all of these subdirs,
generate their makefiles from the top-level configure. For the few
unique ones, deploy a pattern of including subdir logic via m4:
m4_include([machine/nds32/acinclude.m4])
Some of the generated machine makefiles have a bunch of extra stuff
added to them, but that's because they were inconsistent in their
configure libtool calls. The top-level has it, so it now exports some
new vars to the subdirs that weren't already getting them.
The sys configure scripts are almost all effectively stub scripts that
pass the higher level options to their own makefiles. The phoenix & linux
ones are a bit more complicated with nested subdirs, so those have been
left alone for now. Plus, I don't really have a way of testing them.
There's no need to have a sys/ subdir just to copy the sys/$arch/crt0.o
up to sys/crt0.o, and then have libc/ copy sys/crt0.o up again. Just
have libc/ refer to sys/$arch/crt0.o directly and drop the intermediate
makefile entirely.
The sys/{configure,Makefile} files exist to fan out to the specific
sys/$arch/ subdir, and to possibly generate a crt0. We already have
all that same info in the libc/ dir itself, so by moving the recursive
configure and make calls into it, we can cut off some of this logic
entirely and save the overhead.
For arches that don't have a sys subdir, it means they can skip the
logic entirely.
The sys subdir itself is kept for the crt0 logic, for now. We'll try
and clean that up next.
The machine/{configure,Makefile} files exist only to fan out to the
specific machine/$arch/ subdir. We already have all that same info
in the libc/ dir itself, so by moving the recursive configure and
make calls into it, we can cut off this logic entirely and save the
overhead.
For arches that don't have a machine subdir, it means they can skip
the logic entirely, although there probably aren't too many of those.
This makes the makefile logic a bit cleaner so we don't have two
files maintaining lists of sources & objects. Since the logic is
tied to cpu capabilities, pass those boolean settings down from
the configure logic to the makefile logic.
This will also make it easier to throw away the configure script
in a follow up commit and just keep the makefile.
The nds32 & spu dirs are using compile tests to look for some
preprocessor defines, but we don't need to compile the code,
just preprocess it. So switch to AC_PREPROC_IFELSE.
The sh dir is using a preprocessor test via grep, but let's
switch it to AC_PREPROC_IFELSE too to be consistent.
This should allow us to drop the uncommon AC_NO_EXECUTABLES call.
This was added decades ago, but the commit message lacks any
explanation, and it was unused when it was merged. It's still
unused today. So punt it all.
Generating these files is very cheap, so let's just do it all the time.
This makes the build logic simpler, and keeps errors from slipping into
codepaths that are not well tested. Creating these files doesn't mean
they'll be included in the manual implicitly.
For example, some of the nano stdio files break documentation because
they don't have any chew directives in them. But no one noticed since
that code path is rarely enabled. So drop the _i and _float def files.
It's unclear why this was added originally, but assuming it was needed
20 years ago, it shouldn't be explicitly required nowadays. Current
versions of autotools already take care of exporting LDFLAGS to the
Makefile as needed (i.e. when things are actually getting linked).
That's why the configure diffs show LDFLAGS still here, just shifted
to a different place in the output list. A few dirs stop exporting
LDFLAGS, but that's because they don't do any linking, only compiling,
so dropping it there is correct.
As for the use of $ldflags instead of the standard $LDFLAGS, I can't
really explain that at all. Just use the standard name so users don't
have to dig into why their setting isn't respected, only to find they
were expected to use a non-standard name instead. Adjust the testsuite
to match.
Now that we require a recent version of autoconf, we can rely on this
macro working. This change was already made to libm, but these other
dirs were missed as I didn't notice it being duplicated in 3 places.
The list of iconv to/from defines is hand maintained in newlib.hin.
Let's leverage mkdeps.pl to generate this list automatically from the
list of known encodings. The newlib.hin list is up-to-date, so the
list in iconv.m4 matches the list already generated.
This define is only used by newlib internally, so stop exporting it
as HAVE_INITFINI_ARRAY since this can conflict with defines packages
use themselves.
We don't really need to add _ to HAVE_INIT_FINI too since it isn't
exported in newlib.h, but might as well be consistent here.
We can't (easily) add this to newlib_cflags like HAVE_INIT_FINI is
because this is based on a compile-time test in the top configure,
not on plain shell code in configure.host. We'd have to replicate
the test in every subdir in order to have it passed down.
Currently this is only enabled in the top-level as that's the only
place where it seemed to be used. However, the libc/sys/phoenix/ dir
also uses this functionality, yet fails to explicitly enable it.
Automake worked around it, but generated warnings. Move the option
to NEWLIB_CONFIGURE so all dirs get it automatically iff they end
up using the option. If they don't use the option, there's no
difference to the generated code.
Since AM_INIT_AUTOMAKE calls AC_PROG_AWK, and some configure.ac
scripts call it too, we end up testing for awk multiple times. If
we change NEWLIB_CONFIGURE to require the macro instead, then it
makes sure it's always expanded, but only once.
While we're here, do the same thing with AC_PROG_INSTALL since it
is also called by AM_INIT_AUTOMAKE, although it doesn't currently
result in duplicate configure checks.
The AC_LIBTOOL_WIN32_DLL macro has been deprecated for a while and code
should call LT_INIT with win32-dll instead. Update the calls to match.
The generated code is noisy not because of substantial differences, but
because the order of some macros changes (i.e. instead of calling AS and
then CC, CC is called first and then AS).
Since automake already sets per-library CCASFLAGS to $(AM_CCASFLAGS)
by default, there's no need to explicitly set it here.
Many of these dirs don't have .S files in the first place, so the rule
doesn't even do anything. That can easily be seen when Makefile.in has
no changes as a result.
For the dirs with .S files, the custom rules are the same as the pattern
.S.o rules, so this is a nice cleanup.
The only dir that was adding extra flags (newlib/libc/machine/mn10300/)
to the per-library setting can have it moved to the global AM_CCASFLAGS
since the subdir only has one target. Although the setting just adds
extra debugging flags, so maybe it should be deleted in general.
There are a few dirs where we leave the redundant setting in place.
This is to work around an automake limitation in subdirs that support
building both with & w/out libtool:
https://www.gnu.org/software/automake/manual/html_node/Objects-created-both-with-libtool-and-without.html
This matches what the other GNU toolchain projects have done already.
The generated diff in practice isn't terribly large. This will allow
more use of subdir local.mk includes due to fixes & improvements that
came after the 1.11 release series.
The newlib & libgloss dirs are already generated using autoconf-2.69.
To avoid merging new code and/or accidental regeneration using diff
versions, leverage config/override.m4 to pin to 2.69 exactly. This
matches what gcc/binutils/gdb are already doing.
The README file already says to use autoconf-2.69.
To accomplish this, it's just as simple as adding -I flags to the
top-level config/ dir when running aclocal. This is because the
override.m4 file overrides AC_INIT to first require the specific
autoconf version before calling the real AC_INIT.
Libtool needs to get BSD-format (or MS-format) output out of the system
nm, so that it can scan generated object files for symbol names for
-export-symbols-regex support. Some nms need specific flags to turn on
BSD-formatted output, so libtool checks for this in its AC_PATH_NM.
Unfortunately the code to do this has a pair of interlocking flaws:
- it runs the test by doing an nm of /dev/null. Some platforms
reasonably refuse to do an nm on a device file, but before now this
has only been worked around by assuming that the error message has a
specific textual form emitted by Tru64 nm, and that getting this
error means this is Tru64 nm and that nm -B would work to produce
BSD-format output, even though the test never actually got anything
but an error message out of nm -B. This is fixable by nm'ing *nm
itself* (since we necessarily have a path to it).
- the test is entirely skipped if NM is set in the environment, on the
grounds that the user has overridden the test: but the user cannot
reasonably be expected to know that libtool wants not only nm but
also flags forcing BSD-format output. Worse yet, one such "user" is
the top-level Cygnus configure script, which neither tests for
nor specifies any BSD-format flags. So platforms needing BSD-format
flags always fail to set them when run in a Cygnus tree, breaking
-export-symbols-regex on such platforms. Libtool also needs to
augment $LD on some platforms, but this is done unconditionally,
augmenting whatever the user specified: the nm check should do the
same.
One wrinkle: if the user has overridden $NM, a path might have been
provided: so we use the user-specified path if there was one, and
otherwise do the path search as usual. (If the nm specified doesn't
work, this might lead to a few extra pointless path searches -- but
the test is going to fail anyway, so that's not a problem.)
(Tested with NM unset, and set to nm, /usr/bin/nm, my-nm where my-nm is a
symlink to /usr/bin/nm on the PATH, and /not-on-the-path/my-nm where
*that* is a symlink to /usr/bin/nm.)
ChangeLog
2021-09-27  Nick Alcock  <nick.alcock@oracle.com>

	PR libctf/27967
	* libtool.m4 (LT_PATH_NM): Try BSDization flags with a user-provided
	NM, if there is one.  Run nm on itself, not on /dev/null, to avoid
	errors from nms that refuse to work on non-regular files.  Remove
	other workarounds for this problem.  Strip out blank lines from the
	nm output.
This reports common symbols like GNU nm, via a type code of 'C'.
ChangeLog
2021-09-27  Nick Alcock  <nick.alcock@oracle.com>

	PR libctf/27967
	* libtool.m4 (lt_cv_sys_global_symbol_pipe): Augment symcode for
	Solaris 11.
AR from older binutils doesn't work with --plugin and rc:
[hjl@gnu-cfl-2 bin]$ touch foo.c
[hjl@gnu-cfl-2 bin]$ ar --plugin /usr/libexec/gcc/x86_64-redhat-linux/10/liblto_plugin.so rc libfoo.a foo.c
[hjl@gnu-cfl-2 bin]$ ./ar --plugin /usr/libexec/gcc/x86_64-redhat-linux/10/liblto_plugin.so rc libfoo.a foo.c
./ar: no operation specified
[hjl@gnu-cfl-2 bin]$ ./ar --version
GNU ar (Linux/GNU Binutils) 2.29.51.0.1.20180112
Copyright (C) 2018 Free Software Foundation, Inc.
This program is free software; you may redistribute it under the terms of
the GNU General Public License version 3 or (at your option) any later version.
This program has absolutely no warranty.
[hjl@gnu-cfl-2 bin]$
Check if AR works with --plugin and rc before passing --plugin to AR and
RANLIB.
	PR ld/27173
	* configure: Regenerated.
	* libtool.m4 (_LT_CMD_OLD_ARCHIVE): Check if AR works with
	--plugin and rc before enabling --plugin.

config/

	PR ld/27173
	* gcc-plugin.m4 (GCC_PLUGIN_OPTION): Check if AR works with
	--plugin and rc before enabling --plugin.

libiberty/

	PR ld/27173
	* configure: Regenerated.

zlib/

	PR ld/27173
	* configure: Regenerated.
The configure scripts were regenerated with 2.69 for the newlib-4.2.0
release in 484d2ebf8d, but the aclocal
files were not. Do that now to avoid confusion between the two as to
which version of autoconf was used.
32 bit Cygwin still exports function calls to support old applications.
E. g., when switching from 16 to 32 bit uid/gid values, new functions
like getuid32 have been added and the old getuid function still only
provides 16 bit values. Newly built applications using getuid are
actually calling getuid32.
However, this link magic isn't performed inside Cygwin itself, so if
newlib functions call getuid, they actually call the old getuid, not
the new getuid32. This leads to truncated uid/gid values.
https://cygwin.com/pipermail/cygwin/2022-January/250453.html reports
how this leads to problems in posix_spawn.
Fix this temporarily. i686 support will go away soon in Cygwin and the
fix can be dropped.
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
For some RTEMS multilibs, the FPU and Altivec units are disabled during
interrupt handling. Do not save and restore the corresponding registers in
this case.
Since automake deprecated the INCLUDES name in favor of AM_CPPFLAGS,
change all existing users over. The generated code is the same since
the two variables have been used in the same exact places by design.
There are other cleanups to be done, but let's focus on just renaming
here so we can upgrade to a newer automake version w/out triggering
new warnings.
The 'cygnus' option was removed from automake 1.13 in 2012, so the
presence of this option prevents that or a later version of automake
being used.
A check-list of the effects of '--cygnus' from the automake 1.12
documentation, and steps taken (where possible) to preserve those
effects (See also this thread [1] for discussion on that):
[1] https://lists.gnu.org/archive/html/bug-automake/2012-03/msg00048.html
1. The foreign strictness is implied.
Already present in AM_INIT_AUTOMAKE in newlib/acinclude.m4
2. The options no-installinfo, no-dependencies and no-dist are implied.
Already present in AM_INIT_AUTOMAKE in newlib/acinclude.m4
Future work: Remove no-dependencies and any explicit header dependencies,
and use automatic dependency tracking instead. Are there explicit rules
which are now redundant after removing no-installinfo and no-dist?
3. The macro AM_MAINTAINER_MODE is required.
Already present in newlib/acinclude.m4
Note that maintainer-mode is still disabled by default.
4. Info files are always created in the build directory, and not in the
source directory.
This appears to be an error in the automake documentation describing
'--cygnus' [2]. newlib's info files are generated in the source
directory, and no special steps are needed to keep doing that.
[2] https://lists.gnu.org/archive/html/bug-automake/2012-04/msg00028.html
5. texinfo.tex is not required if a Texinfo source file is specified.
(The assumption is that the file will be supplied, but in a place that
automake cannot find.)
This effect is overridden by an explicit setting of the TEXINFO_TEX
variable (the directory part of which is fed into texi2X via the
TEXINPUTS environment variable).
6. Certain tools will be searched for in the build tree as well as in the
user's PATH. These tools are runtest, expect, makeinfo and texi2dvi.
For obscure automake reasons, this effect of '--cygnus' is not active
for makeinfo in newlib's configury.
However, there appears to be top-level configury which selects in-tree
runtest, expect and makeinfo, if present. So, if that works as it
appears, this effect is preserved. If not, this may cause problems if
anyone is building those tools in-tree.
This effect is not preserved for texi2dvi. This may cause problems if
anyone is building texinfo in-tree.
If needed, explicit checks for those tools looking in places relative to
$(top_srcdir)/../ as well as in PATH could be added.
7. The check target doesn't depend on all.
This effect is not preserved. The check target now depends on the all
target.
This concern seems somewhat academic given the current state of the
testsuite.
Also note that this doesn't touch libgloss.
Add all the effects of 'cygnus' for which there exists an explicit way
to request that behaviour:
* Implied foreign strictness and options no-installinfo, no-dependencies
and no-dist are added to AM_INIT_AUTOMAKE in newlib/acinclude.m4.
* macro AM_MAINTAINER_MODE is added to newlib/acinclude.m4.
* For the implied TEXINFO_TEX of '$(top_srcdir)/../texinfo/texinfo.tex',
an explicit TEXINFO_TEX is always relative to $(srcdir), so write the
same pathname in that form.
This is to prepare for the removal of the automake option '--cygnus'.
- This patch uses gdtoa imported from OpenBSD if newlib configure
option "--enable-newlib-use-gdtoa=no" is NOT specified. gdtoa
provides more accurate output and faster conversion than legacy
ldtoa, while it requires more heap memory.
This patch fixed a problem which isn't in newlib, but in projects
incorrectly using symbols from the reserved namespace.
This reverts commit 3ba1bd0d9d.
- Currently, printf("%La\n", 1e1000L) crashes with segv due to the
lack of a frexpl() function. With this patch, frexpl() has been
implemented in libm to solve this issue.
Addresses: https://sourceware.org/pipermail/newlib/2021/018718.html
- If a number with a large integer part is printed with a small
fraction part specified in the output format, e.g.
printf("%.3f", sqrt(2)*1e60);, the number of valid output digits
was insufficient. This patch fixes the issue.
https://cygwin.com/pipermail/cygwin/2021-November/249930.html
reported a regression introduced by using a dynamically sized local
char array in favor of a statically sized array.
Fix this by reverting to a statically sized array, using a small
buffer on the stack for a reasonable number of requested digits and a
big malloc'ed buffer otherwise. This should work for small targets
as well, given that malloc is used in printf anyway right now.
This is *still* hopefully just a temporary measure, unless somebody
actually provides a new ldtoa.
Fixes: 4d90e53359 ("ldtoa: fix dropping too many digits from output")
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
Use the same name as glibc & gnulib to indicate "newlib itself is
being compiled". This also harmonizes the codebase a bit in that
_LIBC was already used in places instead of _COMPILING_NEWLIB.
Building for bfin-elf, mips-elf, and x86_64-pc-cygwin produces
the same object code.
Some distros enable _FORTIFY_SOURCE by default which upsets building
newlib which itself implements the logic for this define. For example,
building gets.c fails because the includes set up a gets() macro which
expands in the definition.
Since newlib isn't prepared to build itself with _FORTIFY_SOURCE, and
it's not clear if it's even useful, ignore it when building the code.
This also matches what glibc is doing.
The _COMPILING_NEWLIB symbol is for declaring "the code is being
compiled for newlib itself" so headers can change behavior vs the
header being used by users (who should get the normal clean API).
Unfortunately, this symbol is defined inconsistently leading to it
only being useful for a few subsections of the tree.
Pull it out so that it's defined all the time for all targets.
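As a hedged illustration of the pattern this symbol enables (not a
quote from the newlib sources; the helper name is made up), a header
can expose internal-only pieces when newlib itself is being built:

#ifdef _LIBC    /* previously _COMPILING_NEWLIB */
/* Internal-only declarations visible while building newlib itself;
   user programs including this header never see them. */
extern int __newlib_internal_helper (void);   /* hypothetical name */
#endif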
We're seeing a build failure in GNU sim code which is using _P locally
but the ctype.h define clashes with it. Rename these to use the same
symbols that glibc does. They're a bit more verbose, but seems likely
that we'll have fewer conflicts if glibc isn't seeing them.
However, these shortnames are still used internally by ctype modules
to produce pretty concise source code, so move the short names to the
internal ctype_.h where short name conflicts shouldn't show up.
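A hedged sketch of the resulting split (the glibc-style names are the
real target of the rename; the exact values and placement shown here
are illustrative):

/* ctype.h (public header): only the verbose glibc-style names. */
#define _ISupper  01
#define _ISlower  02

/* ctype_.h (internal header): short aliases kept for newlib's own
   ctype sources, out of the public namespace. */
#define _U  _ISupper
#define _L  _ISlower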
This code looks like it's written to be copied & pasted between different
C libraries and relies on _LIBC only being used with glibc. This will
break when newlib changes from _COMPILING_NEWLIB to _LIBC, so delete
the glibc-specific logic ahead of time.
ldtoa cuts the number of digits it returns based on a computation of
the number of supported bits (144) multiplied by log10(2). Not only is
the integer approximation of log10(2) ~= 8/27 missing a digit here
(144 * log10(2) is roughly 43.3 digits, while 144 * 8/27 is only about
42.7), it also fails to take really small double and long double
values into account.
Allow for the full potential precision of long double values. At the
same time, change the local string array allocation to request only as
much bytes as necessary to support the caller-requested number of
digits, to keep the stack size low on small targets.
In the long run a better fix would be to switch to gdtoa, as the BSD
variants, as well as Mingw64 do.
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
Currently, newlib does not declare strsignal if DEFS_H is defined,
ostensibly to work around a gdb bug. However, gdb itself compiles
even with this ifndef removed, and this makes sim (another part of
gdb) fail to compile.
Since it is not clear exactly what issue this was working around,
this patch just replaces that ifdef with the correct check,
i.e. __POSIX_VISIBLE >= 200809.
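A minimal sketch of the guard described above (surrounding header
context omitted):

#if __POSIX_VISIBLE >= 200809
char *strsignal (int __signo);
#endif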
Reported by prodisDown:
In picolibc/newlib/libc/string/strrchr.c

  if (i) {
      while ((s = strchr (s, i))) {
          last = s;
          s++;
      }
  } else {
      last = strchr (s, i);
  }

A value such as 0xFFFFFF00 passes the if (i) test but is then
truncated to char (i.e. to 0) inside strchr(), so strchr() returns the
terminating NUL, s++ steps past it, and the next iteration reads past
the end of the buffer.
It can be fixed by a preventive typecast, i = (int) (char) i;, or by
typecasting inside the expression, if ((char) i).
Fixed by casting to char.
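A sketch of the corrected function under that fix (formatting and
naming are illustrative, but the logic follows the quoted code with
the truncation applied up front):

#include <string.h>

char *
strrchr (const char *s, int i)
{
  const char *last = NULL;
  char c = i;                  /* truncate once, matching strchr's view */

  if (c) {
      while ((s = strchr (s, c))) {
          last = s;
          s++;
      }
  } else {
      last = strchr (s, c);
  }

  return (char *) last;
}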
Signed-off-by: Keith Packard <keithp@keithp.com>
Add specialized rotations RB_RED_ROTATE_LEFT() and RB_RED_ROTATE_RIGHT() which
may be used if we rotate a red child which has a black sibling. Such a red
node must have at least two child nodes so that the following red-black tree
invariant is fulfilled:
Every path from a given node to any of its descendant NULL nodes goes through
the same number of black nodes.
        PARENT
       /      \
   BLACK      RED
             /   \
         BLACK   BLACK
Add specialized rotations RB_PARENT_ROTATE_LEFT() and RB_PARENT_ROTATE_RIGHT()
which may be used if the parent node exists and the direction of the child is
known. The specialized rotations are derived from RB_ROTATE_LEFT() and
RB_ROTATE_RIGHT() where the RB_SWAP_CHILD() was replaced by a simple
assignment.
In RB_GENERATE_REMOVE_COLOR() simplify a chain of conditions of the following
pattern
  if (x) {
          ...
  } else if (!x) {
          ...
  }

to

  if (x) {
          ...
  } else {
          ...
  }
We have
#define RB_ISRED(elm, field) \
((elm) != NULL && RB_COLOR(elm, field) == RB_RED)
So, the RB_ISRED() contains an implicit check for NULL. In
RB_GENERATE_REMOVE_COLOR() the "elm" pointer cannot be NULL in the while
condition. Use RB_COLOR(elm) == RB_BLACK instead.
When newlib is configured with --enable-newlib-reent-check-verify,
the assert macro is already defined in the nano-mallocr.c compile unit.
Contributed by STMicroelectronics
Signed-off-by: Torbjörn SVENSSON <torbjorn.svensson@st.com>
Fix an unused function warning for two_way_short_needle, and
different char type warnings for standard string functions.
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
_STDIO_WITH_THREAD_CANCELLATION_SUPPORT was never defined.
Include ../stdio/local.h to get the right definition per target.
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
__STDINT_EXP() is provided by newlib but not by stdint-gcc.h.
stdint-gcc.h is used when the GCC argument -ffreestanding is used,
which results in this file not compiling.
This patch to the libc/machine/nvptx port of newlib implements an
approximation of "clock" and provides some additional stub routines.
These changes not only reduce the number of (link) failures in the GCC
testsuite when targeting nvptx-none, but also allow the NIST scimark4
benchmark to compile and run without modification.
newlib already contains support for backends to provide their own
clock implementations via -DCLOCK_PROVIDED. That functionality is
used here to return an approximate elapsed time based on the NVidia
GPU's clock64 cycle counter. Although not great, this is better than
the current behaviour of a link error from the unresolved symbol
_times_r.
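A minimal sketch of the approach, with made-up names
(read_cycle_counter() stands in for a %clock64 read, and the cycle
rate is an assumed constant, not the actual nvptx code):

#include <time.h>

extern unsigned long long read_cycle_counter (void);  /* hypothetical %clock64 wrapper */
#define GPU_CYCLES_PER_SEC 1000000000ULL               /* assumed clock rate */

clock_t
clock (void)
{
  /* Scale raw cycles down to CLOCKS_PER_SEC ticks. */
  return (clock_t) (read_cycle_counter ()
                    / (GPU_CYCLES_PER_SEC / CLOCKS_PER_SEC));
}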
The other part of the patch is to add a small number of stub functions
to nvptx's misc.c. Adding isatty, for example, resolves linking
problems in libc from the dependency in __smakebuf_r, and the sync
stub, for example, fixes the failure with GCC's
testsuite/gfortran.dg/ISO_Fortran_binding_14.f90 [which simply tests
that gfortran can call a/any C function].
newlib/
	* configure.host: Add -DCLOCK_PROVIDED to newlib_cflags on nvptx*.

newlib/libc/machine/nvptx
	* Makefile.am: Add clock.c to lib_a_SOURCES.
	* clock.c: New source file to implement/approximate clock().
	* misc.c: Add stubs for fstat, isatty, open, sync and unlink.
_strtod_l as well as the gethex function both fetch the decimal point
from the current LC_NUMERIC locale info. This pulls in _C_numeric_locale
unconditionally even on targets not supporting locales at all.
Another problem is that strtod.c and gdtoa-gethex.c are ELIX 1, while
locale information in general isn't. This leads to potential build
breakage on bare metal targets.
Fix this by setting the decimal point to "." on all targets not
defining __HAVE_LOCALE_INFO__.
While at it, const'ify the entire local decimal point info in the
affected functions.
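Roughly, the shape of the change is the following (the accessor and
type names are an approximation of newlib's internal locale API, so
treat them as illustrative):

static const char *
get_decimal_point (struct __locale_t *loc)
{
#ifdef __HAVE_LOCALE_INFO__
  return __get_numeric_locale (loc)->decimal_point;
#else
  (void) loc;
  return ".";          /* no locale support: decimal point is always "." */
#endif
}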
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
svfwscanf replaces getwc and ungetwc_r. The comments in the code talk
about avoiding file operations, but they also need to bypass the
mbtowc calls as svfwscanf operates on wchar_t, not multibyte data,
which is a more important reason here; they would not work correctly
otherwise.
The ungetwc replacement has code which uses the 3 byte FILE _ubuf
field, but if wchar_t is 32-bits, this field is not large enough to
hold even one wchar_t value. Building in this mode generates warnings
about array overflow:
In file included from ../../newlib/libc/stdio/svfiwscanf.c:35:
../../newlib/libc/stdio/vfwscanf.c: In function '_sungetwc_r.isra':
../../newlib/libc/stdio/vfwscanf.c:316:12: warning: array subscript 4294967295 is above array bounds of 'unsigned char[3]' [-Warray-bounds]
316 | fp->_p = &fp->_ubuf[sizeof (fp->_ubuf) - sizeof (wchar_t)];
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from ../../newlib/libc/stdio/stdio.h:46,
from ../../newlib/libc/stdio/vfwscanf.c:82,
from ../../newlib/libc/stdio/svfiwscanf.c:35:
../../newlib/libc/include/sys/reent.h:216:17: note: while referencing '_ubuf'
216 | unsigned char _ubuf[3]; /* guarantee an ungetc() buffer */
| ^~~~~
However, the vfwscanf code *never* ungets data before the start of the
scanning operation, and *always* ungets data which matches the input
at that point, so the code always hits the block which backs up over
the input data and never hits the block which uses the _ubuf field.
In addition, the svfwscanf code will always start with the unget
buffer empty, so the ungetwc replacement never needs to support an
unget buffer at all.
Simplify the code by removing support for everything other than
backing up over the input data, leaving the check to make sure it
doesn't get underflowed in case the vfscanf code has a bug in it.
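A minimal sketch of what remains (the FILE member usage follows the
fields quoted above, but is illustrative rather than the exact newlib
code):

#include <stdio.h>
#include <wchar.h>

static wint_t
sungetwc_sketch (wint_t wc, FILE *fp)
{
  if (wc == WEOF)
    return WEOF;

  /* Only back up over wchar_t data we already consumed, and never
     past the start of the buffer, in case the caller misbehaves. */
  if (fp->_p >= fp->_bf._base + sizeof (wchar_t))
    {
      fp->_p -= sizeof (wchar_t);
      fp->_r += sizeof (wchar_t);
    }
  return wc;
}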
Signed-off-by: Keith Packard <keithp@keithp.com>
Added function prototypes to newlib/libc/include/pthread.h
for the following Issue 8 Standard APIs:
pthread_cond_clockwait()
pthread_mutex_clocklock()
pthread_rwlock_clockrdlock()
pthread_rwlock_clockwrlock()
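For reference, the Issue 8 signatures being declared look roughly like
this (parameter names are illustrative; the clockid_t argument selects
the clock used for the absolute timeout):

#include <pthread.h>
#include <time.h>

int pthread_cond_clockwait (pthread_cond_t *cond, pthread_mutex_t *mutex,
                            clockid_t clock_id, const struct timespec *abstime);
int pthread_mutex_clocklock (pthread_mutex_t *mutex, clockid_t clock_id,
                             const struct timespec *abstime);
int pthread_rwlock_clockrdlock (pthread_rwlock_t *rwlock, clockid_t clock_id,
                                const struct timespec *abstime);
int pthread_rwlock_clockwrlock (pthread_rwlock_t *rwlock, clockid_t clock_id,
                                const struct timespec *abstime);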
A recent patch introduced new code for sig2str/str2sig.
This code does not properly exclude code that requires
SIGRTMIN/SIGRTMAX to be defined and triggers the following
compile error:
newlib/libc/signal/sig2str.c:199:8: error: 'SIGRTMIN' undeclared
newlib/libc/signal/sig2str.c:200:29: error: 'SIGRTMAX' undeclared
Let's add the missing guards.
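The shape of the missing guards, shown as a small self-contained
sketch (the helper function is hypothetical, not the sig2str.c code):

#include <signal.h>
#include <stdio.h>

int
print_rt_range (void)
{
#if defined(SIGRTMIN) && defined(SIGRTMAX)
  /* Only reference the realtime signal range when the target defines it. */
  return printf ("rt signals: %d..%d\n", SIGRTMIN, SIGRTMAX);
#else
  return 0;   /* no realtime signals on this target */
#endif
}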
Fixes: 2b50ec0cd2 ("libc: Fix compilation for new sig2str/str2sig implementation")
Signed-off-by: Christoph Muellner <cmuellner@gcc.gnu.org>
Added implementations for sig2str() and str2sig() in libc/signal
in order to improve POSIX compliance. Added function prototypes
in libc/include/sys/signal.h.
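For reference, the two interfaces roughly as POSIX Issue 8 specifies
them (parameter names are illustrative):

int sig2str (int signum, char *str);        /* signal number -> name, e.g. "TERM" */
int str2sig (const char *restrict str,
             int *restrict pnum);           /* name or number string -> signal number */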
riscv64-unknown-elf-g++-11.1.0 regression suite reports the following
failures for
$ make check-gcc-c++ RUNTESTFLAGS='dg.exp=Wstringop-overflow-6.C'
```
FAIL: g++.dg/warn/Wstringop-overflow-6.C -std=gnu++14 (test for excess errors)
FAIL: g++.dg/warn/Wstringop-overflow-6.C -std=gnu++17 (test for excess errors)
FAIL: g++.dg/warn/Wstringop-overflow-6.C -std=gnu++2a (test for excess errors)
UNSUPPORTED: g++.dg/warn/Wstringop-overflow-6.C -std=gnu++98
```
The "excess errors" being
```
output is In file included from /home/maxim/prj/riscv-upstream/install/riscv64-unknown-elf/include/wchar.h:6,
from /home/maxim/prj/riscv-upstream/build/gcc-stage2/riscv64-unknown-elf/libstdc++-v3/include/cwchar:44,
from /home/maxim/prj/riscv-upstream/build/gcc-stage2/riscv64-unknown-elf/libstdc++-v3/include/bits/postypes.h:40,
from /home/maxim/prj/riscv-upstream/build/gcc-stage2/riscv64-unknown-elf/libstdc++-v3/include/iosfwd:40,
from /home/maxim/prj/riscv-upstream/build/gcc-stage2/riscv64-unknown-elf/libstdc++-v3/include/ios:38,
from /home/maxim/prj/riscv-upstream/build/gcc-stage2/riscv64-unknown-elf/libstdc++-v3/include/ostream:38,
from /home/maxim/prj/riscv-upstream/build/gcc-stage2/riscv64-unknown-elf/libstdc++-v3/include/iostream:39,
from /home/maxim/prj/riscv-upstream/gcc-11.1.0/gcc/testsuite/g++.dg/warn/Wstringop-overflow-6.C:6:
/home/maxim/prj/riscv-upstream/install/riscv64-unknown-elf/include/sys/reent.h:685:11: warning: unnecessary parentheses in declaration of '_sig_func' [-Wparentheses]
```
- GCC will set __FLT_EVAL_METHOD__ to 16 if __fp16 is supported, e.g.
cortex-a55/aarch64.
- $ aarch64-unknown-elf-gcc -v 2>&1 |grep version
gcc version 9.2.0 (GCC)
- $ aarch64-unknown-elf-gcc -E -dM -mcpu=cortex-a55 - < /dev/null |grep FLT_EVAL_METHOD
#define __FLT_EVAL_METHOD__ 16
#define __FLT_EVAL_METHOD_TS_18661_3__ 16
#define __FLT_EVAL_METHOD_C99__ 16
- The behavior of __FLT_EVAL_METHOD__ == 16 is the same as
__FLT_EVAL_METHOD__ == 0 except for float16_t, but newlib doesn't
support float16_t.
ISO/IEC TS 18661-3:
http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2405.pdf
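A hedged sketch of the corresponding math.h handling, treating method
16 like method 0 since there is no float16_t type to map float_t to
(the exact header layout may differ):

#if __FLT_EVAL_METHOD__ == 0 || __FLT_EVAL_METHOD__ == 16
typedef float  float_t;
typedef double double_t;
#elif __FLT_EVAL_METHOD__ == 1
typedef double float_t;
typedef double double_t;
#elif __FLT_EVAL_METHOD__ == 2
typedef long double float_t;
typedef long double double_t;
#endif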
V2 Changes:
- List Howland, Craig D as co-author since he provided the draft of the
comment in math.h.
Co-authored-by: "Howland, Craig D" <howland@LGSInnovations.com>
The C standard says that errno may acquire the value ERANGE if the
result from strtod underflows. According to IEEE 754, underflow occurs
whenever the value cannot be represented in normalized form.
Newlib is inconsistent in this, setting errno to ERANGE only if the
value underflows to zero, but not for denorm values, and never for hex
format floats.
This patch attempts to consistently set errno to ERANGE for all
'underflow' conditions, which is to say all values which are not
exactly zero and which cannot be represented in normalized form.
This matches glibc behavior, as well as the Linux, Mac OS X, OpenBSD,
FreeBSD and SunOS strtod man pages.
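A small example of the behavior being made consistent (1e-320 is a
denormal for IEEE double, so errno should end up as ERANGE even though
a non-zero value is returned):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

int
main (void)
{
  errno = 0;
  double d = strtod ("1e-320", NULL);
  if (errno == ERANGE)
    printf ("underflow reported, value %g\n", d);
  return 0;
}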
Signed-off-by: Keith Packard <keithp@keithp.com>
The scanf code was skipping the '0' after the 'x' causing the
resulting buffer to contain an invalid number when passed to strtod.
Signed-off-by: Keith Packard <keithp@keithp.com>
Newlib for aarch64 uses libgloss for the backend. One common libgloss
implementation is the 'rdimon' implementation, which uses the Arm
Semihosting protocol. In order to support a remote host that runs on
Windows we need to know whether a file is to be opened in binary or
text mode. That means that we need to preserve this information via
O_BINARY until we know what the libgloss binding will be.
This patch simply copies the arm implementation from sys/arm/sys and
puts it in machine/aarch64/sys, because we don't have a 'sys' subtree
on aarch64.
The only reason it is tough for us to use nano malloc
is a small shortcoming where nano_malloc()
splits a bigger chunk from the free list into two pieces
and hands back the second one (the tail) to the user.
This is error prone and especially bad for smaller heaps,
where nano malloc is supposed to be superior. The normal
malloc doesn't have this issue and we need to use it even
though it costs us ~2k bytes compared to nano-malloc.
The problem arises especially after giving back _all_
malloced memory to the heap and then starting to exercise
the heap again by allocating something small. This small
item might split the whole heap in two equally big parts
depending on how the heap has been exercised before.
I have uploaded the smallest possible application
(only tested on ST and Nordic devices) to show the issue
while the real customer applications are far more complicated:
https://drive.google.com/file/d/1kfSC2KOm3Os3mI7EBd-U0j63qVs8xMbt/view?usp=sharing
The application works like the following pseudo code,
where we assume a heap of 100 bytes
(I haven't taken padding and other nitty-gritty details into
account; everything is simplified for understanding):
  void *ptr = malloc(52);  // We get 52 bytes and we have
                           // 48 bytes to use.

  free(ptr);               // Hand back the 52 bytes to nano_malloc

  // This is the magic line that shows the issue of nano_malloc
  ptr = malloc(1);         // Nano malloc will split the 52 bytes
                           // in the free list and hand you a pointer
                           // somewhere in the middle of the heap.

  ptr2 = malloc(52);       // Out of memory...
I have done a fix which hands back the first part of the
split chunk. Once this is fixed, we obviously have the
1 byte placed at position 0 of the heap instead of
somewhere in the middle.
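A rough sketch of the idea (not nano-malloc's real data structures):
carve the request off the *head* of a large free chunk and keep the
tail on the free list, instead of handing back the tail:

#include <stddef.h>

struct free_chunk
{
  size_t size;               /* total bytes in this chunk, header included */
  struct free_chunk *next;   /* next chunk on the free list                */
};

static void *
take_from_chunk (struct free_chunk **link, struct free_chunk *c, size_t need)
{
  /* 'need' already includes room for the allocation's size header. */
  if (c->size >= need + sizeof (struct free_chunk))
    {
      /* Keep the tail as a (smaller) free chunk... */
      struct free_chunk *tail = (struct free_chunk *) ((char *) c + need);
      tail->size = c->size - need;
      tail->next = c->next;
      *link = tail;
      c->size = need;
    }
  else
    *link = c->next;          /* chunk fits exactly (or nearly): use it all */

  /* ...and hand the head (lowest address) back to the caller. */
  return (void *) ((char *) c + sizeof (size_t));
}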
However, this won't let us malloc 52 new bytes even though
we potentially have 99 bytes left to use in the heap. The
reason is that when we try to do the allocation,
nano-malloc looks into the free list and sees a 51 byte
chunk to be used.
This is not big enough so nano-malloc decides to call
sbrk for _another_ 52 bytes, which is not possible since
there are only 48 bytes left to ask for.
The solution for this problem is to check if the last
item in the free list is adjacent to sbrk(0). If it is,
as it is in this case, we can just ask sbrk for the
remainder of what is needed. In this case 1 byte.
NB! I have only tested the solution on our ST device.
Use the more official fesetenv(FE_DFL_ENV) from _dll_crt0, thus
allowing to drop the _feinitialise declaration from fenv.h.
Provide a no-op _feinitialise in Cygwin as an exportable symbol for
really old applications from the time when _feinitialise was called
from mainCRTStartup in crt0.o.
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
Drop the Cygwin-specific fenv.cc and fenv.h file and use the equivalent
newlib functionality now, so we have at least one example of a user for
this new mechanism.
fenv.c: allow _feinitialise to be called from Cygwin startup code
fenv.h: add declarations for fegetprec and fesetprec for Cygwin only.
Fix a comment.
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>