Performance Tuning Windows 2012: Network Subsystem Part 2
In our previous article we discussed the hardware-supported features of some of the high-end network adapters. Let's take a look at how you can use some of those settings to best advantage. Remember that the correct settings depend on the network adapter, your workload, the resources of the host computer, and of course your performance goals.
Enabling Offload Features
Turning on network adapter offload features typically benefits performance. However, the network adapter might not be powerful enough to handle the offload capabilities at high throughput. For example, enabling segmentation offload can reduce the maximum sustainable throughput on some network adapters because of limited hardware resources. If the reduced throughput is not expected to be a limitation, though, you should enable offload capabilities even for those network adapters. Note that some network adapters require offload features to be independently enabled for the send and receive paths.
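As a minimal sketch, assuming the NetAdapter PowerShell module that ships with Windows Server 2012 and an adapter named "Ethernet" (substitute your own adapter name), you can review and enable the common offloads like this:

    # Review the current offload settings for the adapter
    Get-NetAdapterChecksumOffload -Name "Ethernet"
    Get-NetAdapterLso -Name "Ethernet"

    # Enable checksum offload and large send offload
    Enable-NetAdapterChecksumOffload -Name "Ethernet"
    Enable-NetAdapterLso -Name "Ethernet"

On adapters that enable send and receive offloads independently, check Get-NetAdapterAdvancedProperty for the separate per-path settings.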
Enabling RSS for Web Scenarios
RSS can improve web scalability and performance if you have fewer network adapters than logical processors in your server. When all the web traffic is going through the RSS-capable network adapters, incoming web requests from different connections can be simultaneously processed across different CPUs. Because of the logic in RSS and HTTP for load distribution, performance can be degraded if a non-RSS-capable network adapter accepts web traffic on a server that has one or more RSS-capable network adapters. It is recommended that you use only RSS-capable network adapters for web traffic, or disable RSS on the adapter from the Advanced Properties tab. To determine whether a network adapter is RSS-capable, view the RSS information on the Advanced Properties tab for the device.
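You can also make the same check from PowerShell instead of the device UI; a quick sketch, again assuming the NetAdapter module and an illustrative adapter name:

    # Show whether RSS is supported and enabled, and how it is configured
    Get-NetAdapterRss -Name "Ethernet"

    # Disable or re-enable RSS on a specific adapter
    Disable-NetAdapterRss -Name "Ethernet"
    Enable-NetAdapterRss -Name "Ethernet"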
RSS Profiles and RSS Queues
RSS Profiles are new in Windows Server 2012. The default profile is NUMA Static, which changes the default behavior from previous versions of Windows. We suggest reviewing the available profiles and understanding when each is beneficial. For example, you can use Task Manager to check whether your logical processors are underutilized for receive traffic, and if so, try increasing the number of RSS queues from the default of 2 to the maximum that your network adapter supports. Changing the number of RSS queues is an option your network adapter driver may expose.
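A sketch covering both settings, assuming an adapter named "Ethernet" and a driver that supports at least 4 queues (the values here are illustrative, not recommendations):

    # Select an RSS profile and raise the queue count from the default of 2
    Set-NetAdapterRss -Name "Ethernet" -Profile NUMAStatic -NumberOfReceiveQueues 4

    # Verify the resulting configuration
    Get-NetAdapterRss -Name "Ethernet"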
Increasing Network Adapter Resources
For network adapters that allow manual configuration of resources, such as receive and send buffers, you should increase the allocated resources. Some network adapters set their receive buffers low to conserve memory allocated from the host. The low value results in dropped packets and decreased performance. Therefore, for receive-intensive scenarios, it is recommended that you increase the receive buffer value to the maximum. If your adapter does not support or expose manual resource configuration, it most likely configures the resources dynamically, or they are set to a fixed value that you can't change.
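Many drivers expose these buffers through the standardized *ReceiveBuffers and *TransmitBuffers advanced-property keywords, although keyword names and maximum values vary by vendor. A hedged sketch, assuming the adapter name and that your driver exposes the keyword:

    # List the advanced properties the driver actually exposes
    Get-NetAdapterAdvancedProperty -Name "Ethernet"

    # Raise the receive buffers; 2048 is illustrative, use your driver's maximum
    Set-NetAdapterAdvancedProperty -Name "Ethernet" -RegistryKeyword "*ReceiveBuffers" -RegistryValue 2048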
To control interrupt moderation, some network adapters expose different interrupt moderation levels, buffer coalescing parameters (sometimes separately for send and receive buffers), or both. You should consider interrupt moderation for CPU-bound workloads and weigh the trade-off: moderation saves host CPU at the cost of latency, while disabling it reduces latency at the cost of increased host CPU utilization from the larger number of interrupts. If the network adapter does not perform interrupt moderation but does expose buffer coalescing, increasing the number of coalesced buffers allows more buffers per send or receive, which improves performance.
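Interrupt moderation is also commonly exposed through the standardized *InterruptModeration advanced-property keyword (1 = enabled, 0 = disabled), although not every driver implements it. A sketch under that assumption:

    # Turn interrupt moderation off for a latency-sensitive workload
    Set-NetAdapterAdvancedProperty -Name "Ethernet" -RegistryKeyword "*InterruptModeration" -RegistryValue 0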
Your network adapter has a number of options to optimize latency caused by the operating system. This latency is the time between the network driver processing an incoming packet and sending it back, and it is usually measured in microseconds. For comparison, the transmission time for packets that travel long distances is usually expressed in milliseconds. This tuning will not reduce the time a packet spends in transit.
Some tuning suggestions for microsecond-sensitive networks include:
· Set the computer BIOS to High Performance, with C-states disabled. However, this is system and BIOS dependent, and some systems provide higher performance if the operating system controls power management. You can check and adjust your power management settings from the Control Panel or by using the powercfg command (see the sketch after this list).
· Set the operating system power management profile to High Performance System. For this to work as expected, your system’s BIOS has to be set to enable operating system control of power management.
· Enable Static Offloads, for example, UDP Checksums, TCP Checksums, and Large Send Offload (LSO).
· Enable RSS if the traffic is multi-streamed, such as high-volume multicast receive.
· Disable the Interrupt Moderation setting for network drivers that require the lowest possible latency. The tradeoff is that this can use more CPU time.
· Handle network adapter interrupts and DPCs on a core that shares CPU cache with the core that is used by the program that handles the packet. CPU affinity tuning, in conjunction with the RSS configuration, can be used to direct a process to certain logical processors. Using the same core for the interrupt, DPC, and user mode thread exhibits worse performance as load increases, because the ISR, DPC, and thread contend for the use of the core.
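For the power management items above, a minimal sketch from an elevated prompt; the GUID shown is the well-known built-in High Performance scheme:

    # Show the currently active power scheme
    powercfg /getactivescheme

    # Switch to the built-in High Performance plan
    powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c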
System Management Interrupts
Many hardware systems use System Management Interrupts (SMI) for a variety of maintenance functions, including reporting of ECC memory errors, legacy USB compatibility, fan control, and BIOS controlled power management. The SMI is the highest priority interrupt on the system and places the CPU in a management mode, which preempts all other activity while it runs an interrupt service routine, typically contained in BIOS.
This behavior can result in latency spikes of 100 microseconds or more. If you need to achieve the lowest latency, look for a BIOS version from your hardware provider that reduces SMIs to the lowest degree possible, referred to as “low latency BIOS” or “SMI free BIOS.” It is not possible to eliminate SMI activity altogether because it is used to control some essential functions, such as fan control.
Tuning TCP
TCP Receive Window Auto-Tuning
Prior to Windows Server 2008, the network stack used a fixed-size receive-side window that limited the overall potential throughput for connections. One of the most significant changes to the TCP stack is TCP receive window auto-tuning. You can calculate the total throughput of a single connection that uses this fixed-size default as:
Total achievable throughput in bytes per second = TCP receive window size in bytes * (1 / connection latency in seconds)
As an example, the achievable throughput is only 51 Mbps on a 1 Gbps connection with 10 ms latency. With auto-tuning, the receive-side window is adjustable, and it can grow to meet the demands of the sender, making it possible for a connection to achieve the full line rate of a 1 Gbps connection. Network usage that might have been limited in the past by the total achievable throughput of TCP connections can now fully use the network.
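Receive window auto-tuning is enabled at the "normal" level by default; you can verify the current level, or restore the default if it was changed, with netsh:

    # Display global TCP parameters, including the auto-tuning level
    netsh interface tcp show global

    # Restore the default receive window auto-tuning behavior
    netsh interface tcp set global autotuninglevel=normal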
Windows Filtering Platform
The Windows Filtering Platform (WFP), which was introduced in Windows Vista and Windows Server 2008, provides APIs for non-Microsoft independent software vendors (ISVs) to create packet processing filters. Examples include firewall and antivirus software. Be aware that a poorly written WFP filter can significantly decrease networking performance.
The following registry keywords from Windows Server 2003 are no longer supported, and they are ignored in Windows Server 2012, as well as Windows Server 2008 R2, and Windows Server 2008:
· TcpWindowSize – HKLM\System\CurrentControlSet\Services\Tcpip\Parameters
· NumTcbTablePartitions – HKLM\System\CurrentControlSet\Services\Tcpip\Parameters
· MaxHashTableSize – HKLM\System\CurrentControlSet\Services\Tcpip\Parameters
Network-Related Performance Counters
This section lists the counters that are relevant to managing network performance.
Resource Utilization
· IPv4, IPv6
- Datagrams Received/sec
- Datagrams Sent/sec
· TCPv4, TCPv6
- Segments Received/sec
- Segments Sent/sec
- Segments Retransmitted/sec
· Network Interface(*), Network Adapter(*)
- Bytes Received/sec
- Bytes Sent/sec
- Packets Received/sec
- Packets Sent/sec
- Output Queue Length
This counter is the length of the output packet queue (in packets). If this is longer than 2, delays occur, and you should find the bottleneck and eliminate it if you can. Because NDIS queues the requests in this implementation, this length should always be 0.
· Processor Information
- % Processor Time
- Interrupts/sec
- DPCs Queued/sec
This counter is an average rate at which DPCs were added to the logical processor’s DPC queue. Each logical processor has its own DPC queue. This counter measures the rate at which DPCs are added to the queue, not the number of DPCs in the queue. It displays the difference between the values that were observed in the last two samples, divided by the duration of the sample interval.
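To sample some of these counters interactively, a short sketch using the built-in Get-Counter cmdlet; the counter paths match the list above, and instance names will vary by system:

    # Take five one-second samples of basic NIC throughput
    Get-Counter '\Network Interface(*)\Bytes Received/sec','\Network Interface(*)\Bytes Sent/sec' -SampleInterval 1 -MaxSamples 5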
Potential Network Problems
· Network Interface(*), Network Adapter(*)
- Packets Received Discarded
- Packets Received Errors
- Packets Outbound Discarded
- Packets Outbound Errors
· WFPv4, WFPv6
- Packets Discarded/sec
· UDPv4, UDPv6
- Datagrams Received Errors
· TCPv4, TCPv6
- Connection Failures
- Connections Reset
· Network QoS Policy
- Packets dropped
- Packets dropped/sec
· Per Processor Network Interface Card Activity
- Low Resource Receive Indications/sec
- Low Resource Received Packets/sec
· Microsoft Winsock BSP
- Dropped Datagrams
- Dropped Datagrams/sec
- Rejected Connections
- Rejected Connections/sec
Receive Segment Coalescing (RSC) Performance
· Network Adapter(*)
- TCP Active RSC Connections
- TCP RSC Average Packet Size
- TCP RSC Coalesced Packets/sec
- TCP RSC Exceptions/sec
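You can read the same information programmatically and manage RSC per adapter, assuming the NetAdapter module and an illustrative adapter name:

    # Check RSC capability and current state for all adapters
    Get-NetAdapterRsc

    # Enable RSC on a specific adapter
    Enable-NetAdapterRsc -Name "Ethernet"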