So far Cygwin was jumping through hoops to restrict memory
allocation to specific regions. With the advent of VirtualAlloc2
and MapViewOfFile3 (and its NT counterpart NtMapViewOfSectionEx),
we can skip searching for free space in the specific regions
and just call these functions, letting the OS do the job more
efficiently and with less potential for races.
Use VirtualAlloc2 on W10 1803 and later in thread stack allocation.
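As an illustration only (a hand-written sketch, not the code added by
this commit; alloc_in_region is a made-up name), restricting an
allocation to a region now boils down to handing VirtualAlloc2 a
MEM_ADDRESS_REQUIREMENTS extended parameter:

  #include <windows.h>

  /* Sketch: let the OS pick a spot inside [lowest, highest] instead of
     probing the region for free space ourselves.  Both bounds have to
     respect the allocation granularity.  */
  static void *
  alloc_in_region (void *lowest, void *highest, SIZE_T size)
  {
    MEM_ADDRESS_REQUIREMENTS req = { 0 };
    MEM_EXTENDED_PARAMETER ext = { 0 };

    req.LowestStartingAddress = lowest;
    req.HighestEndingAddress = highest;   /* inclusive upper bound */
    ext.Type = MemExtendedParameterAddressRequirements;
    ext.Pointer = &req;
    return VirtualAlloc2 (GetCurrentProcess (), NULL, size,
                          MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE,
                          &ext, 1);
  }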
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
So far Cygwin was jumping through hoops to restrict memory
allocation to specific regions. With the advent of VirtualAlloc2
and MapViewOfFile3 (and its NT counterpart NtMapViewOfSectionEx),
we can skip searching for free space in the specific regions
and just call these functions, letting the OS do the job more
efficiently and with less potential for races.
Use the new functions on W10 1803 and later in mmap.
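Again purely as an illustration (a sketch with made-up names, not the
mmap code), mapping a view inside a caller-supplied region looks
roughly like this:

  #include <windows.h>

  /* Sketch: the OS places the view somewhere in [lowest, highest].  */
  static void *
  map_in_region (HANDLE mapping, ULONG64 offset, SIZE_T size,
                 void *lowest, void *highest)
  {
    MEM_ADDRESS_REQUIREMENTS req = { 0 };
    MEM_EXTENDED_PARAMETER ext = { 0 };

    req.LowestStartingAddress = lowest;
    req.HighestEndingAddress = highest;   /* inclusive upper bound */
    ext.Type = MemExtendedParameterAddressRequirements;
    ext.Pointer = &req;
    return MapViewOfFile3 (mapping, GetCurrentProcess (), NULL, offset,
                           size, 0, PAGE_READWRITE, &ext, 1);
  }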
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
Windows 10 1803 introduced an extended memory API that allows
specifying the memory regions allocations are to be taken from.
In preparation for using this API, define the struct
MEM_EXTENDED_PARAMETER and friends. Declare the functions
VirtualAlloc2 and NtMapViewOfSectionEx and allow them to be
autoloaded.
Introduce a wincap flag has_extended_mem_api.
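For reference, the SDK-style definitions being mirrored look roughly
like this (a trimmed sketch with MY_ prefixes to avoid clashing with
the real headers; only the members needed for address requirements are
shown):

  #include <windows.h>

  typedef struct
  {
    PVOID  LowestStartingAddress;
    PVOID  HighestEndingAddress;
    SIZE_T Alignment;
  } MY_MEM_ADDRESS_REQUIREMENTS;

  typedef struct
  {
    DWORD64 Type : 8;      /* e.g. MemExtendedParameterAddressRequirements */
    DWORD64 Reserved : 56;
    union
    {
      DWORD64 ULong64;
      PVOID   Pointer;     /* -> address requirements struct above */
      SIZE_T  Size;
      HANDLE  Handle;
      DWORD   ULong;
    };
  } MY_MEM_EXTENDED_PARAMETER;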
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
Otherwise big stacks have a higher probability of colliding with
randomized PEBs and TEBs after fork.
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
Unfortunately Windows doesn't understand WSL symlinks, even
though handling them would be a really easy job. NT functions trying
to access paths traversing WSL symlinks return the status
code STATUS_IO_REPARSE_TAG_NOT_HANDLED. Handle this
status code the same as STATUS_OBJECT_PATH_NOT_FOUND in
symlink_info::check to align behaviour with traversing
paths containing other non-NTFS type symlinks.
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
The descriptions of symlink handling are a bit dated, so
revamp them and add the new WSL symlink type.
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
WSL symlinks are reparse points containing a POSIX path in UTF-8.
On filesystems supporting reparse points, use this symlink type.
On other filesystems, or in case of error, fall back to the good
old plain SYSTEM file.
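For reference, a hedged sketch of the reparse data involved (member
names are illustrative, not necessarily those used in the source; the
payload details are as observed on WSL-created links):

  #include <windows.h>

  /* Sketch: WSL symlink reparse point.  The tag is
     IO_REPARSE_TAG_LX_SYMLINK (0xa000001d); the payload is a 32-bit
     type/version field, apparently always 2 for symlinks, followed by
     the POSIX target path in UTF-8, not NUL-terminated.  */
  typedef struct
  {
    DWORD ReparseTag;          /* 0xa000001d */
    WORD  ReparseDataLength;   /* sizeof (DWORD) + path length */
    WORD  Reserved;
    DWORD Type;                /* observed as 2 */
    char  PathBuffer[1];       /* UTF-8 POSIX path */
  } LX_SYMLINK_REPARSE_SKETCH;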
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
Commit 4a36897af3 made it possible to convert /mnt/<drive> path
prefixes to Cygwin cygdrive prefixes on the fly. However,
the patch neglected WSL symlinks pointing to the /mnt
directory. Rearrange path conversion so /mnt is converted
to the cygdrive prefix path itself.
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
Treat WSL symlinks just like other symlinks. Convert
absolute paths pointing to Windows drives via
/mnt/<driveletter> to Windows-style paths <driveletter>:
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
The IEEE spec for pow only has special cases for x**0 and 1**y when x/y
are quiet NaNs. For a signaling NaN, the general case applies and these functions
should signal the invalid exception and return a quiet NaN.
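A small sketch of the behaviour aimed for (__builtin_nans is only used
here to construct a signaling NaN; it is not part of the change):

  #include <fenv.h>
  #include <math.h>
  #include <stdio.h>

  int
  main (void)
  {
    double snan = __builtin_nans ("");

    feclearexcept (FE_ALL_EXCEPT);
    double r = pow (snan, 0.0);   /* sNaN**0: general case, no shortcut */
    printf ("pow(sNaN, 0) = %g, invalid raised: %d\n",
            r, fetestexcept (FE_INVALID) != 0);
    /* Expected after the change: a quiet NaN and the invalid exception.
       pow(qNaN, 0) still returns 1.0 without raising anything.  */
    return 0;
  }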
Signed-off-by: Keith Packard <keithp@keithp.com>
These functions shared a pattern of re-converting the argument to bits
when returning +/-0. Skip that as the initial conversion still has the
sign bit.
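Reduced to a sketch (not the actual newlib source), the pattern now
looks like this:

  #include <stdint.h>
  #include <string.h>

  /* Sketch: the sign is already present in the bits extracted on entry,
     so there is no need to convert x a second time just to build a
     correctly signed zero.  */
  static double
  signed_zero_sketch (double x)
  {
    uint64_t bits, zbits;
    double zero;

    memcpy (&bits, &x, sizeof bits);        /* initial conversion */
    /* ... main computation decides that a zero must be returned ... */
    zbits = bits & UINT64_C (0x8000000000000000);
    memcpy (&zero, &zbits, sizeof zero);    /* +/-0 with x's sign */
    return zero;
  }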
Signed-off-by: Keith Packard <keithp@keithp.com>
Recent GCC appears to elide multiplication by 1, which causes snan
parameters to be returned unchanged through *iptr. Use the existing
conversion of snan to qnan to also set the correct result in *iptr
instead.
Signed-off-by: Keith Packard <keithp@keithp.com>
This fix comes from glibc, from files which originated from
the same place as the newlib files. Those files in glibc carry
the same license as the newlib files.
Bug 14155 is spurious underflow exceptions from Bessel functions for
large arguments. (The correct results for large x are roughly
constant * sin or cos (x + constant) / sqrt (x), so no underflow
exceptions should occur based on the final result.)
There are various places underflows may occur in the intermediate
calculations that cause the failures listed in that bug. This patch
fixes problems for the double version where underflows occur in
calculating the intermediate functions P and Q (in particular, x**-12
gets computed while calculating Q). Appropriate approximations are
used for P and Q for arguments at least 0x1p28 and above to avoid the
underflows.
For sufficiently large x (0x1p129 and above) the code already has a
cut-off to avoid calculating P and Q at all, which means the
approximations -0.125 / x and 0.375 / x can't themselves cause
underflows calculating Q. This cut-off is heuristically reasonable
for the point beyond which Q can be neglected (based on expecting
around 0x1p-64 to be the least absolute value of sin or cos for large
arguments representable in double).
The float versions use a cut-off 0x1p17, which is less heuristically
justifiable but should still only affect values near zeroes of the
Bessel functions where these implementations are intrinsically
inaccurate anyway (bugs 14469-14472), and should serve to avoid
underflows (the float underflow for jn in bug 14155 probably comes
from the recurrence to compute jn). ldbl-96 uses 0x1p129, which may
not really be enough heuristically (0x1p143 or so might be safer - 143
= 64 + 79, number of mantissa bits plus total number of significant
bits in representation) but again should avoid underflows and only
affect values where the code is substantially inaccurate anyway.
ldbl-128 and ldbl-128ibm share a completely different implementation
with no such cut-off, which I propose to fix separately.
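A hedged sketch of the added branch (not a verbatim copy of the
glibc/newlib code; qzero_sketch is an illustrative name):

  /* Sketch: for x >= 0x1p28 the generic rational approximation of Q,
     whose smallest term involves x**-12 and underflows, is replaced by
     its leading asymptotic term.  -0.125 / x is the order-0 (j0/y0)
     case; j1/y1 use 0.375 / x.  The pre-existing cut-off at 0x1p129
     skips P and Q entirely.  */
  static double
  qzero_sketch (double x)
  {
    if (x >= 0x1p28)
      return -0.125 / x;      /* no intermediate underflow possible */
    /* ... otherwise evaluate the full rational approximation ... */
    return -0.125 / x;        /* placeholder for the generic path */
  }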
Signed-off-by: Keith Packard <keithp@keithp.com>
This reverts commit 082f2513c7.
Turns out, Linux as well as BSD only wait for the smaller of the two
numbers, MIN or the number of requested bytes.
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
Per termios, read waits for MIN chars even if the number of requested
bytes is less. This requires adding WaitCommEvent to wait non-busily
for MIN chars prior to calling ReadFile, so reintroduce it.
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
Sharing the OVERLAPPED struct and its event object between
read and select calls in the fhandler might have been a nice
optimization way back when, but it is a dangerous, non-thread-safe
approach. Fix this by creating per-fhandler, per-call OVERLAPPED
structs and event objects on demand.
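The per-call pattern, as a minimal sketch (names and error handling
simplified; this is not the fhandler code):

  #include <windows.h>

  /* Sketch: every overlapped read gets its own OVERLAPPED and event
     object instead of sharing one between read and select.  */
  static BOOL
  read_overlapped (HANDLE h, void *buf, DWORD len, DWORD *ret)
  {
    OVERLAPPED ov = { 0 };
    BOOL res = FALSE;

    ov.hEvent = CreateEvent (NULL, TRUE, FALSE, NULL);
    if (!ov.hEvent)
      return FALSE;
    if (ReadFile (h, buf, len, ret, &ov)
        || (GetLastError () == ERROR_IO_PENDING
            && GetOverlappedResult (h, &ov, ret, TRUE)))
      res = TRUE;
    else
      CancelIo (h);
    CloseHandle (ov.hEvent);
    return res;
  }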
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
- make sure event object is reset
- set read_ready to true if WaitCommEvent returns success
- improve debugging
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
tcsetattr checks if the VTIME and VMIN values changed and only
calls SetCommTimeouts if so. That's a problem if tcsetattr
is supposed to set VTIME and VMIN to 0, because these are the
start values anyway. But this requires setting ReadIntervalTimeout
to MAXDWORD, which just doesn't happen.
Fix this by dropping the over-optimization of checking the old
values before calling SetCommTimeouts.
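For illustration, the corner case in question (a sketch with a made-up
helper, not the fhandler code): VMIN == 0 and VTIME == 0 means poll,
which on the Windows side only works with ReadIntervalTimeout set to
MAXDWORD:

  #include <windows.h>

  /* Sketch: make ReadFile return immediately with whatever is buffered.
     If tcsetattr compares against the old VMIN/VTIME values, which are
     0 from the start, this SetCommTimeouts call never happens.  */
  static BOOL
  set_poll_timeouts (HANDLE h)
  {
    COMMTIMEOUTS to = { 0 };

    to.ReadIntervalTimeout = MAXDWORD;
    return SetCommTimeouts (h, &to);
  }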
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
After changing the type of fhandler_serial::vtime_ to cc_t, vtime_
must be stored in tenths of a second, not in milliseconds.
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
Get rid of WaitCommEvent and of using overlapped_armed to share the
same overlapped operation between read and select. Rather, make
sure to cancel the overlapped IO before leaving any of these functions.
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
- Datatypes were incorrect, especially vmin_ and vtime_.
Change them to cc_t, as in user space.
- Error checking had a gap or two. Debug output used the
wrong formatting.
- Don't use ev member for ClearCommError and WaitCommEvent.
Both returned values are different (error value vs. event
code). The values are not used elsewhere so it doesn't make
sense to store them in the object. Therefore, drop ev member.
- Some variable names were not very helpful. Especially using
n as lpNumberOfBytesTransferred from GetOverlappedResult and
then actually printing it as if it makes sense was quite
puzzling.
- Rework the loop and the definition of minchars so that it
still makes sense when looping.
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
- Don't use ev member for ClearCommError and WaitCommEvent.
Both returned values are different (error value vs. event
code). The values are not used elsewhere so it doesn't make
sense to store them in the object.
- Drop local variable ready which is used inconsequentially.
- Since WFSO already waits 10 ms, don't wait again if no char
is in the inbound queue.
- Avoid else if chains.
- Only print one line of debug output on error.
- Drop overlapped_armed < 0 check. This value is only set in
fhandler_serial::raw_read if VTIME > 0, and even then it's only
set to be immediately reset to 0 before calling ReadFile. So
overlapped_armed is never actually < 0 when calling select.
- Fix a screwed up statement order.
Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
Add the mask missing from the decomposition of hi+lo; its absence caused
some errors of 1-2 ULP.
This change is taken over from FreeBSD:
95436ce20d
Additionally I've removed some variable assignments which were never
read before being overwritten again in the next 2 lines.
This fix for k_tan.c is a copy from fdlibm version 5.3 (see also
http://www.netlib.org/fdlibm/readme), adjusted to use the macros
available in newlib (SET_LOW_WORD).
This fix reduces the ULP error of the value shown in the fdlibm readme
(tan(1.7765241907548024E+269)) to 0.45 (thereby reducing the error by
1).
This issue only happens for large numbers that get reduced by the range
reduction to a value smaller in magnitude than 2^-28 and that is also
reduced an odd number of times. This seems rather unlikely given that
one ULP is (much) larger than 2^-28 for the values that may cause an
issue. Although given the sheer number of values a double can
represent, it is still possible that there are more affected values;
finding them, however, will be quite hard, if not impossible.
We also took a look at how another library (libm in FreeBSD) handles the
issue: In FreeBSD the complete if branch which checks for values smaller
than 2^-28 (or rather 2^-27, another change done by FreeBSD) is moved
out of the kernel function and into the external function. This means
that the value that gets checked for this condition is the unreduced
value. Therefore the input value which caused a problem in the
fdlibm/newlib kernel tan will run through the full polynomial, including
the careful calculation of -1/(x+r). So the difference is really whether
r or y is used. r = y + p with p being the result of the polynomial with
1/3*x^3 being the largest (and magnitude defining) value. With x being
<2^-27 we therefore know that p is smaller than y (y has to be at least
the size of the value of x's last mantissa bit divided by 2, which is at
least x*2^-51 for doubles) by enough to warrant saying that r ~ y. So
we can conclude that the general implementation of this special case is
the same; FreeBSD simply has a different philosophy on when to handle
especially small numbers.
Make line 47 in sf_trunc.c reachable. While converting the double
precision function trunc to the single precision version truncf an error
was introduced into the special case. This special case is meant to
catch both NaNs and infinities, however qNaNs and infinities work just
fine with the simple return of x (line 51). The only error occurs for
sNaNs where the same sNaN is returned and no invalid exception is
raised.
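A reduced sketch of the intended special case (not the sf_trunc.c
source; the bit mask is written out instead of using the newlib macros):

  #include <stdint.h>
  #include <string.h>

  /* Sketch: NaN and infinity must go through "x + x", which quiets a
     signaling NaN and raises the invalid exception, instead of being
     returned unchanged.  */
  static float
  truncf_special_sketch (float x)
  {
    uint32_t ix;

    memcpy (&ix, &x, sizeof ix);
    if ((ix & 0x7f800000) == 0x7f800000)   /* inf or NaN */
      return x + x;
    /* ... normal truncation path ... */
    return x;
  }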
The comparison c == FP_INFINITE causes the function to return +inf as it
expects x = +inf to always be larger than y. This shortcut causes
several issues as it also returns +inf for the following cases:
- fdim(+inf, +inf), expected (as per C99): +0.0
- fdim(-inf, any non NaN), expected: +0.0
I don't see a reason to keep the comparison as all the infinity cases
return the correct result using just the ternary operation.
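The resulting logic, as a sketch (using isnan instead of the
classification macros the real source uses):

  #include <math.h>

  /* Sketch: with the FP_INFINITE shortcut gone, NaNs are still
     propagated and every other case, infinities included, falls out of
     the ternary operation.  */
  static double
  fdim_sketch (double x, double y)
  {
    if (isnan (x) || isnan (y))
      return x + y;                /* propagate the NaN */
    return x > y ? x - y : +0.0;
  }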
While testing the exp function we noticed some errors at the specified
magnitude. Within this range the exp function returns the input value +1
as an output. We chose to run a test of 1m exponentially spaced values
in the ranges [-2^-27,-2^-32] and [2^-32,2^-27] which showed 7603 and
3912 results with an error of >=0.5 ULP (compared with MPFR in 128 bit)
with the highest being 0.56 ULP and 0.53 ULP.
It's easy to fix by changing the magnitude at which the input value +1
is returned from <2^-28 to <2^-32 and using the polynomial instead. This
reduces the number of results with an error of >=0.5 ULP to 485 and 479
in the above tests, all of which are exactly 0.5 ULP.
As we were already checking on exp we also took a look at expf. For expf
the magnitude where the input value +1 is returned can be increased from
<2^-28 to <2^-23 without accuracy loss for a slight performance
improvement. To ensure this was the correct value we tested all values
in the ranges [-2^-17,-2^-28] and [2^-28,2^-17] (~92.3m values each).
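Schematically (a sketch, not the e_exp.c source, which compares the raw
exponent bits rather than calling fabs):

  #include <math.h>

  /* Sketch: shrink the "1 + x" shortcut from |x| < 2^-28 to |x| < 2^-32
     and let the polynomial handle the range in between.  The expf change
     goes the other way: its shortcut is widened to |x| < 2^-23 without
     accuracy loss.  */
  static double
  exp_tiny_sketch (double x)
  {
    if (fabs (x) < 0x1p-32)      /* previously 0x1p-28 */
      return 1.0 + x;
    /* ... argument reduction and polynomial evaluation ... */
    return exp (x);              /* stand-in for the real computation */
  }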
Passing a pointer to a local variable to WriteConsoleA is
not actually needed if we're not going to do anything with
what WriteConsoleA would put in there.
For the wpbuf class the pointer argument was made optional,
so it can be just left out; other call sites now pass a
NULL pointer instead. The local variables `wn' and `n'
are now unused, so they go away.
Replace direct access to a pair of co-dependent variables
by calls to methods of a class that encapsulates their relation.
Also replace C #define by C++ class constant.
The single-precision trigonometric functions show rather high errors in
specific ranges starting at about 30000 radians. For example the sinf
procedure produces an error of 7626.55 ULP with the input
5.195880078125e+04 (0x474AF6CD) (compared with MPFR in 128bit
precision). For the test we used 100k values evenly spaced in the range
of [30k, 70k]. The issues are periodic at higher ranges.
This error was introduced when the double precision range reduction was
first converted to float. The shift by 8 bits always returns 0 as iq is
never higher than 255.
The fix reduces the error of the example above to 0.45 ULP; the highest
error within the test set fell to 1.31 ULP, which is not perfect, but
still a significant improvement. Testing other previously erroneous
ranges no longer shows particularly large accuracy errors.