TCP Auto-Tuning on Windows 10


Windows Receive Window Auto-Tuning was introduced in Windows Vista and is still present in Windows 10. It automatically adjusts the TCP receive window to improve performance for programs that receive TCP data over a network; in today’s Internet, the range of latencies and throughput speeds is too large to manage with static settings. In general the feature should be left enabled unless an older router or firewall on the network path does not support it.

This article describes how the Receive Window Auto-Tuning feature improves data transfer, how to enable or disable this feature for HTTP traffic on Windows Vista-based computers, and issues that may occur after you enable this feature for HTTP traffic.

Original product version: Windows Vista
Original KB number: 947239

Introduction

Windows Vista includes the Receive Window Auto-Tuning feature that improves performance for programs that receive TCP data over a network. However, this feature is disabled by default for programs that use the Windows HTTP Services (WinHTTP) interface. Some examples of programs that use WinHTTP include Automatic Updates, Windows Update, Remote Desktop Connection, Windows Explorer (network file copy), and Sharepoint (WebDAV).

If you enable Receive Window Auto-Tuning for WinHTTP traffic, data transfers over the network may be more efficient. However, in some cases you might experience slower data transfers or loss of connectivity if your network uses an older router and firewall that does not support this feature. For example, when you use Windows Internet Explorer to access applications that are hosted in Microsoft Office SharePoint Server, the HTTP traffic may slow down. This occurs because certain routers do not support the Receive Window Auto-Tuning feature.

Note

Since the release of Windows 7, Receive Window Auto-Tuning is now available for programs that use the Windows Internet (WinINet) application programming interface (API) for HTTP requests instead of WinHTTP. Some examples of programs that use WinINet for HTTP traffic include Internet Explorer, Outlook, and Outlook Express.

How Receive Window Auto-Tuning feature improves data transfer

The Receive Window Auto-Tuning feature lets the operating system continually monitor routing conditions such as bandwidth, network delay, and application delay. Therefore, the operating system can configure connections by scaling the TCP receive window to maximize network performance. To determine the optimal receive window size, the Receive Window Auto-Tuning feature measures the bandwidth-delay product and the application retrieve rate. Then, the Receive Window Auto-Tuning feature adapts the receive window size of the ongoing transmission to take advantage of any unused bandwidth.

Enable Receive Window Auto-Tuning feature for WinHTTP traffic

Note

Prerequisites: You must be running Windows Vista Service Pack 2 or Windows Vista Service Pack 1, or have hotfix 939006 installed to enable auto-tuning for WinHTTP.

Important


This section, method, or task contains steps that tell you how to modify the registry. However, serious problems might occur if you modify the registry incorrectly. Therefore, make sure that you follow these steps carefully. For added protection, back up the registry before you modify it. Then, you can restore the registry if a problem occurs. For more information about how to back up and restore the registry, see Microsoft Knowledge Base article 322756, How to back up and restore the registry in Windows.
To enable the Receive Window Auto-Tuning feature for HTTP traffic, you must edit the registry. To do this, follow these steps:

  1. Click Start, type regedit in the Start Search box, and then press ENTER.
  2. Locate and then right-click the registry subkey HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp.
  3. Point to New, and then click DWORD Value.
  4. Type TcpAutotuning, and then press ENTER.
  5. Right-click TcpAutotuning, and then click Modify.
  6. In the Value data box, type 1, and then click OK.
  7. Exit Registry Editor.
  8. Restart the computer.

The Receive Window Auto-Tuning feature is enabled for HTTP traffic if the TcpAutotuning registry entry is set to 1. The Receive Window Auto-Tuning feature is not enabled for HTTP traffic if the TcpAutotuning registry entry does not exist or if it is set to a value that is not 1.
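If you prefer the command line, the same registry entry can be created with the built-in reg.exe tool from an elevated command prompt; this is an equivalent sketch of the steps above, not a command taken from the original article. A restart is still required afterwards.

    reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp" /v TcpAutotuning /t REG_DWORD /d 1 /f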

To enable the Receive Window Auto-Tuning feature for Windows Internet (WinINet) HTTP traffic in Windows 7, follow these steps:

  1. Click Start, type regedit in the Search programs and files box, and then press ENTER.

  2. Locate and then right-click the registry subkey HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings.

  3. Point to New, and then click DWORD Value.

  4. Type TcpAutotuning, and then press ENTER.

  5. Right-click TcpAutotuning, and then click Modify.

  6. In the Value data box, type 1, and then click OK.

  7. Repeat step 2 through step 6 to add a TcpAutotuning entry with DWORD value of 1 under the following registry subkey:

    HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Internet Settings

  8. Exit Registry Editor.

  9. Restart the computer.

Receive Window Auto-Tuning is enabled for WinINet traffic if the TcpAutotuning registry entries are set to 1. It is not enabled if the TcpAutotuning registry entries do not exist or if they are set to a value that is not 1.
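The same two WinINet entries can also be created from an elevated command prompt with reg.exe; again, this is an equivalent sketch of steps 1 through 7 rather than a command taken from the article:

    reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings" /v TcpAutotuning /t REG_DWORD /d 1 /f
    reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Internet Settings" /v TcpAutotuning /t REG_DWORD /d 1 /f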

Check whether the problem is fixed. If the problem is fixed, you are finished with this article. If the problem is not fixed, you can contact support.

Issues that may occur after you enable the Receive Window Auto-Tuning feature for HTTP traffic

When the Receive Window Auto-Tuning feature is enabled for HTTP traffic, older routers, older firewalls, and older operating systems that are incompatible with the Receive Window Auto-Tuning feature may sometimes cause slow data transfer or a loss of connectivity. When this occurs, users may experience slow performance. Or, the applications may crash. These older devices do not comply with the RFC 1323 standard. Some device manufacturers provide software that works around the hardware limitations. Contact the device manufacturer to determine whether this kind of software is available.

If the incompatible devices are outside your organization, and you cannot change the devices, this issue will remain. Therefore, you may have to disable the Receive Window Auto-Tuning feature for HTTP traffic.

Disable the Receive Window Auto-Tuning feature

To disable the Receive Window Auto-Tuning feature for HTTP traffic, follow these steps:

  1. Log on to the computer as a user who has administrative credentials.

  2. Click Start, type runas /user:local_computer_name\administrator cmd in the Start Search box, and then press ENTER.

  3. When you are prompted for the administrator account password, type the correct password, and then press ENTER.

  4. At the command prompt, type the command that disables the Receive Window Auto-Tuning feature (see the example after these steps), and then press ENTER.

  5. Exit the Command Prompt window.

  6. Restart the computer.
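The exact command for step 4 is missing from this copy of the article. A command commonly used for this purpose, which disables Receive Window Auto-Tuning globally rather than only for HTTP traffic, is:

    netsh interface tcp set global autotuninglevel=disabled

To return to the default behavior later, set the level back to normal:

    netsh interface tcp set global autotuninglevel=normal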

Check whether the problem is fixed. If the problem is fixed, you are finished with this article. If the problem is not fixed, you can contact support.

Enabling High Performance Data Transfers

System Specific Notes for System Administrators (and Privileged Users)

On this page:

These notes are intended to help users and system administrators maximize TCP/IP performance on their computer systems. They summarize all of the end-system (computer system) network tuning issues, including a tutorial on TCP tuning, easy configuration checks for non-experts, and a repository of operating system specific instructions for getting the best possible network performance on these platforms.

This material is currently under active revision. Please send any suggestions, additions or corrections to us at nettune@psc.edu so we can keep the information here as up-to-date as possible.

  • Tutorial
    • Bandwidth*Delay Products (BDP)
    • Buffers
    • Computing the BDP
  • High Performance Networking Options
    • Maximum Buffer Sizes on the host
    • Socket Buffers sizes
    • Negotiating TCP Large Windows options (RFC1323)
    • TCP Selective Acknowledgments (SACK, RFC2018)
    • Path MTU
  • Using Web Based Network Diagnostic Servers
    • Quick Link: NPAD Pathdiag end system and last mile diagnostics at PSC
    • Quick Link: NPAD Documentation and additional servers.
    • Quick Link: SYN Test Server
    • Quick Link: ORNL Bandwidth Tester
    • Quick Link: ANL Network Diagnostic Tool
  • Detailed Procedures for System tuning and Raising network limits for:

Introduction

Today, the majority of university users have physical network connections that are at least 100 megabits per second all the way through the Internet to every important data center in the world (as well as to every other university user). For many users, that connection might be 1 gigabit per second or faster. In some countries (e.g. Korea and Japan) the same statement applies to every home connection as well: 100 Mb/s from home to all important web servers, data centers and to each other.

To put these data rates into perspective, consider this: 100 Mb/s is more than 10 megabytes in one second, or 600 megabytes (an entire CD-R image) in one minute. Clearly very few people see these data rates. However, some experts can get very high data rates (for example see the Land Speed Records). Why? The biggest strength of the Internet is the way in which the TCP/IP “hourglass” hides the details of the network from the application and vice versa. An unfortunate but direct consequence of the hourglass is that it also hides all flaws everywhere. Network performance debugging (often euphemistically called “TCP tuning”) is extremely difficult because nearly all flaws have exactly the same symptom: reduced performance. For example, insufficient TCP buffer space is indistinguishable from excess packet loss (silently repaired by TCP retransmissions) because both flaws just slow the application, without any specific identifying symptoms.

Flaws fall into three broad areas: the applications themselves, the computer system (including the operating system and TCP tuning) and the network path. Each of these areas requires a very different approach to performance debugging. This page is focused on helping users and system administrators optimize the TCP/IP on their computer systems.

  • Applications sometimes perform poorly on long paths (even when the network is perfect) because they are not designed to fully overlap the speed of light delay to deliver the data with the processing at the end systems. It is quite difficult to write complicated applications that do this overlap properly, but it must be done for an application to perform well on a long network path. We have developed some tools and documentation to help users and application developers to test and debug applications under these conditions.

    For example, secure shell and secure copy (ssh and scp) implement internal flow control using an application level mechanism that severely limits the amount of data in the network, greatly reducing performance on all but the shortest paths. PSC is now supporting a patch to ssh and scp that updates the application flow control window from the kernel buffer size. With this patch, the TCP tuning directions on this page can alleviate the dominant bottlenecks in scp. In most environments scp will run at full link rate or the CPU limit for the chosen encryption.

  • Network paths can be very hard to debug because TCP’s ability to compensate for flaws is inversely proportional to the round trip time (RTT). So, for example, a flaw that will cause an application to take an extra second on a 1 millisecond path will generally cause the same application to take an extra 10 seconds on a 10 millisecond path. This “symptom scaling” effect arises because TCP’s ability to compensate for flaws is metered in round trips: if a given flaw is compensated in 50 round trips (typical for losses on a medium speed link), then a single loss affects a 1 ms path for only 50 ms, and a 10 ms path for 500 ms. Symptom scaling makes diagnosis particularly difficult, because flaws that are complete show stoppers on long paths are often undetectable on short paths.

    We have a new tool, pathdiag, which compensates for symptom scaling so that it can detect previously undetectable flaws in a local network. The basic approach is to measure the properties of a short section of the path, and extrapolate the results as though the path was extended to the full RTT with an ideal network. It then uses a TCP performance model to predict if the resulting path would perform well enough to meet the application requirements.

    Pathdiag also does a complete check of the TCP options described on this page since it requires a well-tuned TCP implementation at the far end of the path. If it is available to you it is both the easiest to use and the most accurate test available. Since pathdiag requires a very short RTT the closest pathdiag server might not be close enough to you, in which case the SYN test server described below can also verify most TCP configuration options with one click.

The objectives of this page are to summarize all of the end system network tuning issues, provide easy configuration checks for non-experts, and maintain a repository of operating system specific advice and information about getting the best possible network performance on these platforms.

In the Tutorial we will briefly explain the issues and define some terms. Under High Performance Networking Options we describe each of the optional TCP features that may have to be configured, without addressing the details of any specific operating system. The section “Detailed Procedures” provides step-by-step directions on making the necessary changes for several operating systems.

Note that today most TCP implementations are pretty good. The primary flaws are default configurations that are ideal for Local Area Networks (LANs) and Internet back roads: many millions of relatively low speed home users.

Tutorial

The dominant protocol used on the Internet today is TCP, a “reliable” “window-based” protocol. The best possible network performance is achieved when the network pipe between the sender and the receiver is kept full of data.

Bandwidth*Delay Products (BDP)

The amount of data that can be in transit in the network, termed “Bandwidth-Delay-Product,” or BDP for short, is simply the product of the bottleneck link bandwidth and the Round Trip Time (RTT). BDP is a simple but important concept in a window based protocol such as TCP. Some of the issues discussed below arise because of the fact that the BDP of today’s networks has increased way beyond what it was when the TCP/IP protocols were initially designed. In order to accommodate the large increases in BDP, some high performance extensions have been proposed and implemented in the TCP protocol. But these high performance options are sometimes not enabled by default and will have to be explicitly turned on by the system administrators.

Buffers

In a “reliable” protocol such as TCP, the importance of the BDP described above is that it is the amount of buffering required in the end hosts (sender and receiver). The largest buffer the original TCP (without the high performance options) supports is limited to 64 KBytes. If the BDP is small, either because the link is slow or because the RTT is small (in a LAN, for example), the default configuration is usually adequate. But for paths that have a large BDP, and hence require large buffers, the high performance options discussed in the next section must be enabled.

Computing the BDP

To compute the BDP, we need to know the speed of the slowest link in the path and the Round Trip Time (RTT).

The peak bandwidth of a link is typically expressed in Mbit/s (or more recently in Gbit/s). The round-trip delay (RTT) for wide area links is typically between 1 msec and 100 msec, and can be measured with ping or traceroute.

As an example, for two hosts with GigE cards, communicating across a coast-to-coast link over Abilene, the bottleneck link will be the GigE card itself. The actual round trip time (RTT) can be measured using ping, but we will use 70 msec in this example.


Knowing the bottleneck link speed and the RTT, the BDP can be calculated as follows:

    (1,000,000,000 bits / 1 second) * (1 Byte / 8 bits) * 0.07 seconds = 8,750,000 Bytes = 8.75 MBytes

Based on these calculations, it is easy to see why the typical default buffer size of 64 KBytes would be completely inadequate for this connection. With 64 KBytes you would use less than 1% of the available bandwidth (64 KBytes / 8.75 MBytes ≈ 0.7%).
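The same arithmetic is easy to script; the sketch below simply re-derives the numbers above, and the link speed and RTT are the example values from this section, not measurements:

    # BDP (bytes) = link speed (bits/sec) * RTT (sec) / 8 bits per byte
    RATE_BPS=1000000000   # 1 Gbit/s bottleneck (the GigE card in this example)
    RTT_SEC=0.07          # 70 msec, as measured with ping
    awk -v r="$RATE_BPS" -v t="$RTT_SEC" 'BEGIN { bdp = r * t / 8; printf "BDP = %.0f bytes (%.2f MBytes)\n", bdp, bdp / 1000000 }'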

The next section presents a brief overview of the high performance options. Specific details on how to enable these options in various operating systems is provided in a later section.

High Performance Networking Options

The options below are presented in the order that they should be checked and adjusted.

  1. Maximum TCP Buffer (Memory) space: All operating systems have some global mechanism to limit the amount of system memory that can be used by any one TCP connection.

    On some systems, each connection is subject to a memory limit that is applied to the total memory used for input data, output data and control structures. On other systems, there are separate limits for input and output buffer space for each connection.

    Today almost all systems are shipped with Maximum Buffer Space limits that are far too small for nearly all of today’s Internet. Furthermore the procedures for adjusting the memory limits are different on every operating system. You must follow the appropriate detailed procedures below, which generally require privileges on multi-user systems.

  2. Socket Buffer Sizes: Most operating systems also support separate per connection send and receive buffer limits that can be adjusted by the user, application or other mechanism as long as they stay within the maximum memory limits above. These buffer sizes correspond to the SO_SNDBUF and SO_RCVBUF options of the BSD setsockopt() call.

    The socket buffers must be large enough to hold a full BDP of TCP data plus some operating system specific overhead. They also determine the Receiver Window (rwnd), used to implement flow control between the two ends of the TCP connection. There are several methods that can be used to adjust socket buffer sizes:

    1. TCP Autotuning automatically adjusts socket buffer sizes as needed to optimally balance TCP performance and memory usage. Autotuning is based on an experimental implementation for NetBSD by Jeff Semke, and further developed by Wu Feng’s DRS and the Web100 Project. Autotuning is now enabled by default in current Linux releases (after 2.4.27 and 2.6.7). It has also been announced for Windows Vista and Longhorn. In the future, we hope to see all TCP implementations support autotuning with appropriate defaults for other options, making this website largely obsolete.
    2. The default socket buffer sizes can generally be set with global controls. These default sizes are used for all socket buffer sizes that are not set in some other way. For single user systems, manually adjusting the default buffer sizes is the easiest way to tune arbitrary applications. Again, there is no standard method to do this; you must refer to the detailed procedures below.
    3. Since over buffering can cause some applications to behave poorly (typically causing sluggish interactive response) and risk running the system out of memory, large default socket buffers have to be considered carefully on multi-user systems. We generally recommend default socket buffer sizes that are slightly larger than 64 kBytes, which is still too small for optimal bulk transfer performance in most environments. It has the advantage of easing some of the difficulties debugging the TCP Window scale option (see below), without causing problems due to over buffering interactive applications.
    4. For custom applications, the programmer can choose the socket buffer sizes using a setsockopt() system call. A Detailed Programmers Guide by Von Welch at NCSA describes how to set socket buffer sizes within network applications.
    5. Some common applications include built in switches or commands to permit the user to manually set socket buffer sizes. The most common examples include iperf (a network diagnostic), many ftp variants (including gridftp) and other bulk data copy tools. Check the documentation on your system to see what is available.
    6. This approach forces the user to manually compute the BDP for the path and supply the proper command or option to the application.
    7. There has been some work on autotuning within the applications themselves. This approach is easier to deploy than kernel modifications and frees the user from having to compute the BDP, but the application is hampered by having limited access to the kernel resources it needs to monitor and tune.
    8. NLANR/DAST has an FTP client which automatically sets the socket buffer size to the measured bandwidth*delay product for the path. This client can be found at http://dast.nlanr.net/Projects/Autobuf/
    9. NLANR/NCNE maintains a tool repository which includes application enhancements for several versions of FTP and rsh. Also included on this site is the nettune library for performing such enhancements yourself.
  3. TCP Large Window Extensions (RFC1323): These enable optional TCP protocol features (window scale and time stamps) which are required to support large BDP paths.
    • The window scale option (WSCALE) is the most important RFC1323 feature, and can be quite tricky to get correct. Window scale provides a scale factor which is required for TCP to support window sizes that are larger than 64k Bytes. Most systems automatically request WSCALE under some conditions, such as when the receive socket buffer is larger than 64k Bytes or when the other end of the TCP connection requests it first. WSCALE can only be negotiated at the very start of a connection. If either end fails to request WSCALE or requests an insufficient value, it cannot be renegotiated later during the same connection. Although different systems use different algorithms to select WSCALE they are all generally functions of the maximum permitted buffer size, the current receiver buffer size for this connection, or in some cases a global system setting.

      Note that under these constraints (which are common to many platforms), a client application wishing to send data at high rates may need to set its own receive buffer to something larger than 64k Bytes before it opens the connection to ensure that the server properly negotiates WSCALE.

      A few systems require a system administrator to explicitly enable RFC1323 extensions. If a system cannot (or does not) negotiate WSCALE, it cannot support TCP window sizes (BDP) larger than 64k Bytes.

      Another RFC1323 feature is the TCP Timestamp option which provides better measurement of the Round Trip Time and protects TCP from data corruption that might occur if packets are delivered so late that the sequence numbers wrap before they are delivered. Wrapped sequence numbers do not pose a serious risk below 100 Mb/s, but the risk becomes progressively larger as the data rates get higher.

      Due to the improved RTT estimation, many systems use timestamps even at low rates.

  4. TCP Selective Acknowledgments Option (SACK, RFC2018): lets a TCP receiver inform the sender exactly which data is missing and needs to be retransmitted.

    Without SACK TCP has to estimate which data is missing, which works just fine if all losses are isolated (only one loss in any given round trip). Without SACK, TCP often takes a very long time to recover following a cluster of losses, which is the normal case for a large BDP path with even minor congestion. SACK is now supported by most operating systems, but it may have to be explicitly turned on by the system administrator.

    If you have a system that does not support SACK you can often raise TCP performance by slightly starving it for socket buffer space. The buffer starvation prevents TCP from being able to drive the path into congestion, and minimizes the chances of causing clustered losses.

    Additional information on commercial and experimental implementations of SACK is available at http://www.psc.edu/networking/projects/sack/.

  5. Path MTU: The host system must use the largest possible MTU for the path. This may require enabling Path MTU Discovery (RFC1191, RFC1981, RFC4821).

    Since RFC1191 is flawed it is sometimes not enabled by default and may need to be explicitly enabled by the system administrator. RFC4821 describes a new, more robust algorithm for MTU discovery and ICMP black hole recovery. See our page on jumbo MTUs for more information.

    The Path MTU Discovery server (described in a little more detail in the next section) may be useful in checking out the largest MTU supported by some paths.

Note that both ends of a TCP connection must be properly tuned independently, before it will support high speed transfers.

Using Web Based Network Diagnostic Servers

Most tuning problems (and many other network problems) can be diagnosed with a single test from an appropriate diagnostic server. There are several different servers that test various aspects of the end-system and network path.

  • NPAD Diagnostic server (Pathdiag): A new experimental service that can provide one click diagnosis of most end-system and last mile network problems. It will directly diagnose and recommend corrective action for most of the tuning problems listed here. If you just want to test end-system problems, you can use the diagnostic server at PSC. If you also want to test your local network, you should go to the main NPAD page and choose the nearest server.

  • Internet2 NDT servers: This is a web based java server that does a couple of short network tests and provides diagnostic information about the host and path, similar to NPAD/pathdiag. This server is particularly helpful in troubleshooting duplex mismatch problems. It also includes an automatic redirection service to find the closest server within Internet2. The project URL is http://e2epi.internet2.edu/ndt/

  • SYN Test Server: Often users would like a quick way to check if the high performance networking options discussed above (SACK etc.) are turned on. This is a Web100 based server that allows any client on the Internet to do this quickly and easily. It can be accessed either from a text window or from a web browser. In text mode, one can simply type:

    Alternatively, one can point a web browser on the host to the following URL: http://syntest.psc.edu:7961

  • ORNL TCP Bandwidth Tester: The Internet2 testers are patterned after an earlier server at ORNL. This is a web based java server that does a couple of short network tests and measures bandwidth between the host and the server. Simply visit the following URL: http://www.epm.ornl.gov/~dunigan/java/misc/tcpbw.html

  • Path MTU Discovery Server: This is a web based server you can use to determine the largest MTU supported between the server at Pittsburgh Supercomputing Center and the host you are interested in:

This tool shows the path and the largest MTU supported on that path from a server at the Pittsburgh Supercomputing Center to the host in question.

Detailed procedures for system tuning under various operating systems

See the specific instructions for each system:

Note that the instructions below only indicate that they have been tested for specific OS versions. However, most OS vendors rarely make significant changes to their TCP/IP stacks, so these directions are often correct for many versions before or after the stated version. If you find that you need to tweak our directions (especially for newer OS versions), please let us know at nettune@psc.edu.

Procedure for raising network limits under FreeBSD

All system parameters can be read or set with ‘sysctl’. E.g.:
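    # illustrative example: read a value, or set one as root with "sysctl -w"
    sysctl net.inet.tcp.recvspace
    sysctl -w net.inet.tcp.recvspace=262144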

You can raise the maximum socket buffer size by, for example:
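    # 16 MB is an illustrative value; size it to comfortably exceed your largest BDP
    sysctl -w kern.ipc.maxsockbuf=16777216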

FreeBSD 7.0 implements automatic receive and send buffer tuning, which is enabled by default. The default maximum value is 256 KB, which is likely too small, so these limits should be increased, e.g. as follows:
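    # raise the autotuning caps (16 MB here is illustrative)
    sysctl -w net.inet.tcp.sendbuf_max=16777216
    sysctl -w net.inet.tcp.recvbuf_max=16777216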

You can also set the TCP and UDP default buffer sizes using the variables
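net.inet.tcp.sendspace, net.inet.tcp.recvspace, and net.inet.udp.recvspace; for example:

    sysctl -w net.inet.tcp.sendspace=262144   # illustrative 256 KB default send buffer
    sysctl -w net.inet.tcp.recvspace=262144   # illustrative 256 KB default receive buffer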

When using larger socket buffers, you probably need to make sure that the TCP window scaling option is enabled. (The default is not enabled!) Check 'tcp_extensions="YES"' in /etc/rc.conf and ensure it's enabled via the sysctl variable:
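    sysctl -w net.inet.tcp.rfc1323=1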

FreeBSD’s TCP has a thing called “inflight limiting” turned on by default, which can be detrimental to TCP throughput in some situations. If you want “normal” TCP behavior you should turn it off:
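    # disable inflight limiting (variable name as on FreeBSD 7.0)
    sysctl -w net.inet.tcp.inflight.enable=0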

You may also want to confirm that SACK is enabled: (working since FreeBSD 5.3):
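    sysctl net.inet.tcp.sack.enable    # 1 means SACK is enabled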

MTU discovery is on by default in FreeBSD. If you wish to disable MTU discovery, you can toggle it with the sysctl variable:
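    # 1 enables Path MTU Discovery, 0 disables it
    sysctl -w net.inet.tcp.path_mtu_discovery=0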

Contributors: Pekka Savola and David Malone.
Checked for FreeBSD 7.0, Sept 2008

Tuning TCP for Linux 2.4 and 2.6

NB: Recent versions of Linux (version 2.6.17 and later) have full autotuning with 4 MB maximum buffer sizes. Except in some rare cases, manual tuning is unlikely to substantially improve the performance of these kernels over most network paths, and is not generally recommended

Since autotuning and large default buffer sizes were released progressively over a succession of different kernel versions, it is best to inspect and only adjust the tuning as needed. When you upgrade kernels, you may want to consider removing any local tuning.

All system parameters can be read or set by accessing special files in the /proc file system. E.g.:
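    # illustrative example: read a setting; root can write with "echo" or "sysctl -w"
    cat /proc/sys/net/ipv4/tcp_moderate_rcvbuf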

If the parameter tcp_moderate_rcvbuf is present and has the value 1, then autotuning is in effect. With autotuning, the receiver buffer size (and TCP window size) is dynamically updated (autotuned) for each connection. (Sender side autotuning has been present and unconditionally enabled for many years now.)

The per connection memory space defaults are set with two 3 element arrays:
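    /proc/sys/net/ipv4/tcp_rmem    # receive side: min, initial, max (bytes)
    /proc/sys/net/ipv4/tcp_wmem    # send side: min, initial, max (bytes)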

These are arrays of three values: minimum, initial and maximum buffer size. They are used to set the bounds on autotuning and balance memory usage while under memory stress. Note that these are controls on the actual memory usage (not just TCP window size) and include memory used by the socket data structures as well as memory wasted by short packets in large buffers. The maximum values have to be larger than the BDP of the path by some suitable overhead.

With autotuning, the middle value just determines the initial buffer size. It is best to set it to some optimal value for typical small flows. With autotuning, an excessively large initial buffer wastes memory and can even hurt performance.

If autotuning is not present (Linux 2.4 before 2.4.27 or Linux 2.6 before 2.6.7), you may want to get a newer kernel. Alternately, you can adjust the default socket buffer size for all TCP connections by setting the middle tcp_rmem value to the calculated BDP. This is NOT recommended for kernels with autotuning. Since the sending side is autotuned, this is never recommended for tcp_wmem.

The maximum buffer size that applications can request (the maximum acceptable values for SO_SNDBUF and SO_RCVBUF arguments to the setsockopt() system call) can be limited with /proc variables:
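    /proc/sys/net/core/rmem_max    # largest SO_RCVBUF an application may request
    /proc/sys/net/core/wmem_max    # largest SO_SNDBUF an application may request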

The kernel sets the actual memory limit to twice the requested value (effectively doubling rmem_max and wmem_max) to provide for sufficient memory overhead. You do not need to adjust these unless you are planning to use some form of application tuning.

NB: Manually adjusting socket buffer sizes with setsockopt() disables autotuning. Applications that are optimized for other operating systems may implicitly defeat Linux autotuning.

The following values (which are the defaults for 2.6.17 with more than 1 GByte of memory) would be reasonable for all paths with a 4MB BDP or smaller (you must be root):
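    # illustrative commands reproducing the 4 MB figures described in the text
    sysctl -w net.core.rmem_max=4194304
    sysctl -w net.core.wmem_max=4194304
    sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
    sysctl -w net.ipv4.tcp_wmem="4096 16384 4194304"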

Do not adjust tcp_mem unless you know exactly what you are doing. This array (in units of pages) determines how the system balances the total network buffer space against all other LOWMEM memory usage. The three elements are initialized at boot time to appropriate fractions of the available system memory.

You do not need to adjust rmem_default or wmem_default (at least not for TCP tuning). These are the default buffer sizes for non-TCP sockets (e.g. unix domain and UDP sockets).

All standard advanced TCP features are on by default. You can check them by:
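    # all three should report 1
    sysctl net.ipv4.tcp_window_scaling net.ipv4.tcp_timestamps net.ipv4.tcp_sack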

Linux supports both /proc and sysctl (using alternate forms of the variable names – e.g. net.core.rmem_max) for inspecting and adjusting network tuning parameters. The following is a useful shortcut for inspecting all tcp parameters:
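    sysctl -a | grep tcp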

For additional information on kernel variables, look at the documentation included with your kernel source, typically in some location such as /usr/src/linux-<version>/Documentation/networking/ip-sysctl.txt. There is a very good (but slightly out of date) tutorial on network sysctl’s at http://ipsysctl-tutorial.frozentux.net/ipsysctl-tutorial.html.

If you would like these changes to be preserved across reboots, you can add the tuning commands to the file /etc/rc.d/rc.local.

Autotuning was prototyped under the Web100 project. Web100 also provides complete TCP instrumentation and some additional features to improve performance on paths with very large BDP.

Contributors: John Heffner and Matt Mathis

Checked for Linux 2.6.18, 12/5/2006

Tuning TCP for Mac OS X

Mac OS X has a single sysctl parameter, kern.ipc.maxsockbuf, to set the maximum combined buffer size for both sides of a TCP (or other) socket. In general, it can be set to at least twice the BDP. E.g.:
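    # illustrative value (16 MB); use at least twice your computed BDP
    sysctl -w kern.ipc.maxsockbuf=16777216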

The default send and receive buffer sizes can be set using the following sysctl variables:
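    # illustrative 1 MB defaults; keep them within kern.ipc.maxsockbuf
    sysctl -w net.inet.tcp.sendspace=1048576
    sysctl -w net.inet.tcp.recvspace=1048576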

If you would like these changes to be preserved across reboots you can edit /etc/sysctl.conf.

RFC1323 features are supported and on by default. SACK is present and enabled by default in OS X version 10.4.6.

Although we have never tested it, there is a commercial product to tune TCP on Macintoshes. The URL is http://www.sustworks.com/products/prod_ottuner.html. I don’t endorse the product they are selling (since I’ve never tried it). However, it is available for a free trial, and they appear to do an excellent job of describing perf-tune issues for Macs.

Tested for 10.3, MBM 5/15/05

Procedure for raising network limits under Solaris

All system TCP parameters are set with the ‘ndd’ tool (man 1 ndd). Parameter values can be read with:
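    ndd /dev/tcp tcp_max_buf          # tcp_max_buf is just an example parameter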

and set with:
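    ndd -set /dev/tcp tcp_max_buf 4000000     # illustrative value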

RFC1323 timestamps, window scaling and RFC2018 SACK should be enabled by default. You can double check that these are correct:
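    # parameter names as on Solaris 10
    ndd /dev/tcp tcp_wscale_always        # window scaling; 1 = enabled
    ndd /dev/tcp tcp_tstamp_if_wscale     # timestamps when scaling is in use; 1 = enabled
    ndd /dev/tcp tcp_sack_permitted       # SACK; 2 = actively used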

Set the maximum (send or receive) TCP buffer size an application can request:
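    ndd -set /dev/tcp tcp_max_buf 4000000     # illustrative; should exceed your computed BDP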

Set the maximum congestion window:
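    ndd -set /dev/tcp tcp_cwnd_max 4000000    # illustrative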

Set the default send and receive buffer sizes:
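    ndd -set /dev/tcp tcp_xmit_hiwat 4000000  # illustrative default send buffer
    ndd -set /dev/tcp tcp_recv_hiwat 4000000  # illustrative default receive buffer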

Contributors: John Heffner (PSC), Nicolas Williams (Sun Microsystems, Inc)

Checked for Solaris 10.?, 4/12/06

Procedure for raising network limits for Windows XP (and Windows 2000)

The easiest way to tune TCP under Windows XP (and many earlier versions of Windows) is to get DrTCP from the “DSL Reports” download page. Set the “Tcp receive window” to your computed BDP (e.g. 400000), turn on “Window Scaling” and “Selective Acks”. If you expect to use 90 Mb/s or faster, you should also turn on “Time Stamping”. You must restart for the changes to take effect.

If you need to get down in the details, you have to use the ‘regedit’ utility to read and set system parameters. If you are not familiar with regedit you may want to follow the step-by-step instructions [here].

BEWARE: Mistakes with regedit can have very serious consequences that are difficult to correct. You are strongly encouraged to backup the entire registry before you start (use the backup utility) and to export the Tcpip\Parameters subtree to a file, so you can put things back if you need to (use “export” under regedit).

The primary TCP tuning parameters appear in the registry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters.

To enable high performance TCP you must turn on RFC1323 features (create REG_DWORD key “Tcp1323Opts” with value 3) and set the maximum TCP buffer size (create REG_DWORD key “GlobalMaxTcpWindowSize” with an appropriate value such as 4000000, decimal).

If you want to set the system wide default buffer size, create REG_DWORD key “TcpWindowSize” with an appropriate value. This parameter can also be set per interface at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<interfaceGUID>, which may help to protect interactive applications that are using different interfaces from the effects of overbuffering.
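For example, the two system-wide values above can be created from a command prompt with reg.exe instead of editing by hand; the value names are those given in the text, and the numeric data are the same illustrative figures:

    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v Tcp1323Opts /t REG_DWORD /d 3 /f
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v GlobalMaxTcpWindowSize /t REG_DWORD /d 4000000 /f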

For the most up to date detailed technical information, go to the Microsoft knowledge base (at support.microsoft.com) and search product “windows XP” for “TCP/IP performance tuning”.

Speedguide summarizes this material with an intermediate level of detail; however, it is aimed at users with relatively low data rates.

There is also very good page on tuning Windows XP, by Carl Harris at Virginia Tech.

Contributors: Jim Miller (at PSC).
Checked for WindowsXP service pack 2, July 2006

Acknowledgments

Jamshid Mahdavi maintained this page for many years, both at PSC and later, remotely from Novell. We are greatly indebted to his vision and persistence in establishing this resource.

Thanks Jamshid!

Many, many people have helped us compile this information. We want to thank everyone who sent us updates, additions and corrections. We have decided to include attributions for all future contributors. (Sorry not to be able to give full credit where credit is due for past contributors.)

This material has been maintained as a sideline of many different projects, nearly all of which have been funded by the National Science Foundation. It was started under NSF-9415552, but also supported under Web100 (NSF-0083285) and the NPAD project (ANI-0334061).

Matt Mathis <mathis@psc.edu>; and Raghu Reddy <rreddy@psc.edu>;
(with help from many others, especially Jamshid Mahdavi)
$Id: index.php,v 1.21 2008/02/04 21:35:27 mathis Exp $