609d2b22af
Don't set SO_RCVBUF/SO_SNDBUF to fixed values, thus disabling autotuning.

Patch modeled after a patch suggestion from Daniel Havey <dhavey@gmail.com> in
https://cygwin.com/ml/cygwin-patches/2017-q1/msg00010.html:

At Windows we love what you are doing with Cygwin.  However, we have been
getting reports from our hardware vendors that iperf is slow on Windows.
Iperf is of course compiled against the cygwin1.dll and we believe we have
traced the problem down to the function fdsock in net.cc.  SO_RCVBUF and
SO_SNDBUF are being manually set.  The comments indicate that the idea was to
increase the buffer size, but this code must have been written long ago,
because Windows has used autotuning for a very long time now.  Please do not
manually set SO_RCVBUF or SO_SNDBUF, as this will limit your internet speed.
I am providing a patch, an STC (simple test case) and my cygcheck -svr output.
Hope we can fix this.  Please let me know if I can help further.

Simple Test Case: I have a script that pings 4 times and then iperfs for 10
seconds to debit.k-net.fr.

With patch:

$ bash buffer_test.sh 178.250.209.22
usage: bash buffer_test.sh <iperf server name>

Pinging 178.250.209.22 with 32 bytes of data:
Reply from 178.250.209.22: bytes=32 time=167ms TTL=34
Reply from 178.250.209.22: bytes=32 time=173ms TTL=34
Reply from 178.250.209.22: bytes=32 time=173ms TTL=34
Reply from 178.250.209.22: bytes=32 time=169ms TTL=34

Ping statistics for 178.250.209.22:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 167ms, Maximum = 173ms, Average = 170ms
------------------------------------------------------------
Client connecting to 178.250.209.22, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.137.196.108 port 58512 connected with 178.250.209.22 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0- 1.0 sec   768 KBytes  6.29 Mbits/sec
[ 3]  1.0- 2.0 sec  9.25 MBytes  77.6 Mbits/sec
[ 3]  2.0- 3.0 sec  18.0 MBytes   151 Mbits/sec
[ 3]  3.0- 4.0 sec  18.0 MBytes   151 Mbits/sec
[ 3]  4.0- 5.0 sec  18.0 MBytes   151 Mbits/sec
[ 3]  5.0- 6.0 sec  18.0 MBytes   151 Mbits/sec
[ 3]  6.0- 7.0 sec  18.0 MBytes   151 Mbits/sec
[ 3]  7.0- 8.0 sec  18.0 MBytes   151 Mbits/sec
[ 3]  8.0- 9.0 sec  18.0 MBytes   151 Mbits/sec
[ 3]  9.0-10.0 sec  18.0 MBytes   151 Mbits/sec
[ 3]  0.0-10.0 sec   154 MBytes   129 Mbits/sec

Without patch:

dahavey@DMH-DESKTOP ~
$ bash buffer_test.sh 178.250.209.22

Pinging 178.250.209.22 with 32 bytes of data:
Reply from 178.250.209.22: bytes=32 time=168ms TTL=34
Reply from 178.250.209.22: bytes=32 time=167ms TTL=34
Reply from 178.250.209.22: bytes=32 time=170ms TTL=34
Reply from 178.250.209.22: bytes=32 time=169ms TTL=34

Ping statistics for 178.250.209.22:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 167ms, Maximum = 170ms, Average = 168ms
------------------------------------------------------------
Client connecting to 178.250.209.22, TCP port 5001
TCP window size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 10.137.196.108 port 58443 connected with 178.250.209.22 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0- 1.0 sec   512 KBytes  4.19 Mbits/sec
[ 3]  1.0- 2.0 sec  1.50 MBytes  12.6 Mbits/sec
[ 3]  2.0- 3.0 sec  1.50 MBytes  12.6 Mbits/sec
[ 3]  3.0- 4.0 sec  1.50 MBytes  12.6 Mbits/sec
[ 3]  4.0- 5.0 sec  1.50 MBytes  12.6 Mbits/sec
[ 3]  5.0- 6.0 sec  1.50 MBytes  12.6 Mbits/sec
[ 3]  6.0- 7.0 sec  1.50 MBytes  12.6 Mbits/sec
[ 3]  7.0- 8.0 sec  1.50 MBytes  12.6 Mbits/sec
[ 3]  8.0- 9.0 sec  1.50 MBytes  12.6 Mbits/sec
[ 3]  9.0-10.0 sec  1.50 MBytes  12.6 Mbits/sec
[ 3]  0.0-10.1 sec  14.1 MBytes  11.7 Mbits/sec

The output shows that the RTT from my machine to the iperf server is similar
in both cases (about 170 ms), yet with the patch the throughput averages
129 Mbps while without the patch it only averages 11.7 Mbps.

If we calculate the maximum throughput using Bandwidth = Queue/RTT, we get
(212992 * 8) / 0.170 = 10.0231 Mbps.  This is just about what iperf is showing
us without the patch.  Since the buffer size is set to 212992, I believe that
the buffer size is limiting the throughput.  With the patch we have no buffer
limitation (autotuning) and can develop the full potential bandwidth on the
link.

If you want to duplicate the STC you will have to find an iperf server (I
found an extreme case) with a large enough RTT distance from you and try a few
times.  I get varying results depending on Internet traffic, but without the
patch I never exceed the limit caused by the buffering.

Signed-off-by: Corinna Vinschen <corinna@vinschen.de>
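To make the mechanism concrete, here is a minimal, hypothetical C sketch.  It
is not the fdsock code from net.cc; it only illustrates the two behaviours
discussed above: explicitly pinning SO_RCVBUF/SO_SNDBUF with setsockopt (which
disables Windows TCP autotuning and caps the window) versus leaving the
buffers alone, together with the window/RTT ceiling calculation quoted above.
FORCE_FIXED_BUFFERS is an invented compile-time switch used only to mark the
code path being removed.

/* Hypothetical sketch only -- NOT the fdsock() code from net.cc. */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int
main (void)
{
  int fd = socket (AF_INET, SOCK_STREAM, 0);
  if (fd < 0)
    {
      perror ("socket");
      return 1;
    }

#ifdef FORCE_FIXED_BUFFERS
  /* Old behaviour: force a fixed buffer size.  Once SO_RCVBUF/SO_SNDBUF are
     set explicitly, the TCP window can never grow past this value. */
  int fixed = 212992;               /* bytes, the size quoted in the STC */
  setsockopt (fd, SOL_SOCKET, SO_RCVBUF, &fixed, sizeof fixed);
  setsockopt (fd, SOL_SOCKET, SO_SNDBUF, &fixed, sizeof fixed);
#endif
  /* New behaviour: do nothing here and let autotuning size the buffers. */

  int rcvbuf;
  socklen_t len = sizeof rcvbuf;
  if (getsockopt (fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == 0)
    printf ("current SO_RCVBUF: %d bytes\n", rcvbuf);

  /* Ceiling imposed by a fixed window: bandwidth <= window / RTT.
     (212992 bytes * 8 bits) / 0.170 s = ~10.0 Mbit/s, which matches the
     ~11.7 Mbit/s iperf measured without the patch. */
  printf ("fixed-window ceiling: %.2f Mbit/s\n",
          212992.0 * 8 / 0.170 / 1e6);

  close (fd);
  return 0;
}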