Enable HTTP Keep-Alives
1 One way to help your IIS server perform optimally is to make sure that HTTP Keep-Alives are enabled. Although IIS 6.0 enables Keep-Alives by default, this setting makes such a big difference that it’s worth it to make sure that they haven’t been turned off.
The idea behind Keep-Alives is simple. As you probably know, most Web pages are made up of multiple elements. For example, a page might consist of an HTML document and multiple images. In order to display a Web page, a Web browser must download all of the page’s various elements to a local cache. This means that most of the time, a browser must download multiple files in order to display a single Web page.
Normally, when a Web browser downloads a file, it must open a connection to the Web server, download the file, and then close the connection. Since a Web page is usually made up of multiple files, this is very inefficient. There is absolutely no reason to close the connection if the Web browser is just going to have to open it again in order to download the next file. If the HTTP Keep-Alives option is enabled, then IIS holds the connection open so that a Web browser can download multiple files without having to open and close the connection each time.
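The effect of keep-alive can be seen outside of IIS entirely. The sketch below (plain Python, not IIS-specific) spins up a local HTTP/1.1 server and fetches two hypothetical resources over a single TCP connection, confirming that the socket is reused rather than reopened for each file:

```python
# Illustrative sketch (not IIS-specific): shows how HTTP keep-alive lets one
# TCP connection serve several requests. Python's http.server keeps the
# connection open when protocol_version is HTTP/1.1 and Content-Length is set.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enables keep-alive

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/page.html")
first = conn.getresponse().read()
sock_before = conn.sock           # the underlying TCP socket
conn.request("GET", "/logo.gif")  # second request, same connection
second = conn.getresponse().read()
reused = conn.sock is sock_before
print("connection reused:", reused)
server.shutdown()
```

With keep-alive disabled, that second request would have paid for a fresh TCP handshake; multiply that by every image on every page and the savings add up quickly.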
How to do it: To enable the HTTP Keep-Alives option, open the IIS Manager and navigate through the console tree to Internet Information Services | Web Sites | your Web site. Right-click on your Web site and select the Properties command from the context menu. When you do, you will see the site’s Properties sheet (see Figure 1). The Enable HTTP Keep-Alives checkbox is located on the Web Site tab.
Figure 1 Enable Keep-Alives
Adjust Connection Timeouts
2 While IIS can hold an HTTP connection open while a client downloads multiple files, you don’t want to hold the connection open indefinitely. By default, IIS 6.0 is configured to terminate a connection after 120 seconds of inactivity. This is a shorter period than was used in previous versions of IIS for a couple of reasons. First, IIS uses about 10KB of memory for each connection just to keep track of the connection. Terminating idle connections frees up memory. Second, having a short timeout period reduces the potential for denial of service attacks.
At the same time though, a 120-second idle period may or may not be optimal for your organization. Shorter timeout periods usually increase the server’s performance, but performance will degrade if a client’s connection is terminated prematurely.
The only way to determine the optimal timeout period for your server is to use the Performance Monitor to track the Current Connections, Maximum Connections, and Total Connection Attempts counters associated with the Web service performance object. Watch these counters until you have a good idea of the normal values for your organization. Then try incrementally lowering the timeout value and watch the counters for a few days to see how they are affected. The idea is to find the point at which the Current Connections and Maximum Connections counters reach their lowest average value without driving up the Total Connection Attempts counter.
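The trade-off those counters expose can be modeled with a toy calculation (the gap times and timeout values below are invented for illustration and are no substitute for watching real Performance Monitor data): the longer the idle timeout, the fewer total connection attempts a given client generates, at the cost of holding connections, and their memory, open longer.

```python
# A toy model of the timeout trade-off: count how many TCP connections one
# client needs for a series of requests, given the idle gaps between them.
def connection_attempts(gaps, timeout):
    """A new connection is opened whenever the idle gap (seconds) between
    two requests exceeds the server's idle timeout."""
    attempts = 1  # the first request always opens a connection
    for gap in gaps:
        if gap > timeout:
            attempts += 1  # idle connection was closed; client reconnects
    return attempts

gaps = [5, 60, 200, 10, 400, 15]  # made-up seconds between requests
for timeout in (30, 120, 600):
    print(f"timeout {timeout}s -> {connection_attempts(gaps, timeout)} connection attempts")
```

Lowering the timeout from 120 to 30 seconds here raises the attempt count; raising it to 600 drops it to a single connection, but that connection sits idle, consuming memory, for minutes at a time.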
How to do it: You can find the timeout value on the Web Site tab of a site’s Properties sheet, just below the Enable HTTP Keep-Alives checkbox. Change it to the value you want to try.
Enable HTTP Compression
3 Another way to improve performance is to compress the pages that IIS is serving. Of course, compression can be a trade-off because it conserves bandwidth but consumes CPU time and disk space.
The trick to using compression effectively is to understand that in IIS not all files are created equal. For example, suppose that you are hosting a Web site made up mostly of static HTML pages. Compressing static pages requires a minimal effort on the part of the server because the pages can be compressed and then cached. The next time someone requests the page, IIS doesn’t have to compress it again; it can just pull the already compressed page from the cache. I recommend always compressing static pages unless your server is low on CPU resources or disk space. Dynamic pages, on the other hand, can’t really be cached. This means that IIS has to compress the dynamic pages each time that they are requested. If a site gets a lot of traffic, this can mean a lot of extra work for the server.
Just because compressed copies of dynamic pages can’t be cached does not mean that you shouldn’t compress dynamic pages. Dynamic page compression certainly does have its place. If you have a server that has a lightly used CPU, but is low on bandwidth, then it is a perfect candidate for HTTP compression.
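To see why text-heavy pages are such good compression candidates, consider how repetitive HTML markup is. The quick sketch below uses gzip (one of the two schemes IIS supports, alongside deflate) on a made-up sample page; the exact ratio depends on the content, so treat it as indicative only.

```python
# Demonstrates why compressing HTML pays off: markup is highly repetitive,
# so gzip typically shrinks it dramatically. The sample page is invented.
import gzip

page = ("<html><body>"
        + "<p>This paragraph repeats across the page.</p>" * 200
        + "</body></html>").encode("ascii")
compressed = gzip.compress(page)
ratio = len(compressed) / len(page)
print(f"original: {len(page)} bytes, gzipped: {len(compressed)} bytes "
      f"({ratio:.0%} of original)")
```

The CPU time spent producing those bytes is the cost side of the trade-off; for static pages it is paid once per cached copy, while for dynamic pages it is paid on every request.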
You can compress both static and dynamic pages. In either case, you will need to verify that the temporary files are being stored in a suitable location. By default, temporary files will consume up to 95MB of space in the %windir%\IIS Temporary Compressed Files folder. When the 95MB limit has been reached, the older files will be overwritten by newer files.
How to do it: To enable HTTP compression, right-click on the Internet Information Services Manager’s Web Sites container and select the Properties command from the resulting context menu. When the Web Sites Properties sheet appears, select the Service tab. You can compress static files by selecting the Compress static files checkbox (which enables compression on htm, html, and txt files by default). Likewise, you can compress dynamic pages by selecting the Compress application files checkbox shown in Figure 2 (which enables compression on .asp, .dll, and .exe files by default). You’ll also see the option of changing the amount of space consumed by temporary files.
Figure 2 Compress Application Files
You can modify the list of file extensions IIS will include in compression. To add one or more file types to the server-wide static compression configuration, open a command prompt and execute the following commands:
"Newext" is the extension of the new file type you want to compress. You can add multiple file types separated by spaces.
c:\inetpub\adminscripts>cscript adsutil.vbs SET W3SVC/Filters/Compression/Deflate/ HcFileExtensions "htm html txt newext" c:\inetpub\adminscripts>cscript adsutil.vbs SET W3SVC/Filters/Compression/gzip/HcFileExtensions "htm html txt newext"
To remove one or more file types from the server-wide static compression configuration, repeat the previous two commands, leaving out the file type you want to remove.
If you want to add one or more file types to the server-wide dynamic compression configuration, open a command prompt and type the two commands that are shown here:
Once again, "newext" is the extension of the new file type you want to compress. You can add multiple file types separated by spaces. You will have to restart IIS before compression will take effect.
c:\inetpub\adminscripts>cscript adsutil.vbs SET W3SVC/Filters/Compression/Deflate/HcScriptFileExtensions "asp dll exe newext"
c:\inetpub\adminscripts>cscript adsutil.vbs SET W3SVC/Filters/Compression/gzip/HcScriptFileExtensions "asp dll exe newext"
Grow a Web Garden
4 One way that you can increase an application pool’s performance, especially if your infrastructure includes general latency with back-end data sources, is by assigning multiple worker processes to it. The result is called a Web garden. There are a couple of benefits to Web gardens. First, they reduce resource contention. Second, if a Web application causes a worker process to hang (for example, a script stuck in an infinite loop), the other worker processes can keep servicing requests.
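That second benefit is easy to demonstrate by analogy. The sketch below uses Python threads as a stand-in for IIS worker processes (the pool size, request count, and "hang" duration are invented for the demo): one worker gets stuck, but the remaining workers in the pool keep servicing the other requests.

```python
# An analogy for a Web garden: one "worker" in a pool of three hangs, yet
# the healthy workers continue handling requests. Threads stand in for the
# separate worker processes IIS would actually use.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    if n == 0:
        time.sleep(2)      # simulate a hung worker (e.g. a looping script)
        return "hung"
    return f"served {n}"

with ThreadPoolExecutor(max_workers=3) as pool:  # a "garden" of 3 workers
    futures = [pool.submit(handle_request, n) for n in range(6)]
    # The five healthy requests finish promptly despite the hung one.
    quick = [f.result() for f in futures[1:]]
print(quick)
```

With a single worker process, that one hung request would have stalled everything behind it.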
How to do it: You can set the number of worker processes by right-clicking on an application pool and selecting the Properties command from the resulting context menu. The Web garden setting is found on the resulting Properties sheet’s Performance tab shown in Figure 3.
Figure 3 AppPool Properties
Adjust the IIS Object Cache TTL
5 IIS caches any objects that have been requested. Each object within the cache has a time to live (TTL) value associated with it. By default, the TTL value is set at 30 seconds. This means that if an object in the cache hasn’t been used in the last 30 seconds, it is removed. A 30-second TTL may not always be appropriate, however. For example, if all of the pages on your site are dynamic, then IIS doesn’t really benefit much from caching, so you could free up some memory by shortening the TTL. Likewise, if the server is short on memory, then shortening the TTL is a good way to reclaim some memory for other functions. On the other hand, if your server has plenty of free memory and most of the pages are static, then you might be able to improve efficiency by increasing the TTL.
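The eviction behavior the TTL controls can be sketched as a small idle-based cache (this is a simplified model of the idea, not the IIS implementation; the class name and the 0.1-second demo TTL are invented so the example runs quickly):

```python
# A minimal sketch of TTL-based eviction like the IIS object cache uses:
# entries idle for longer than `ttl` seconds are discarded on the next sweep.
import time

class TTLCache:
    def __init__(self, ttl=30.0):     # 30 s mirrors the IIS default
        self.ttl = ttl
        self._data = {}               # key -> (value, last_access_time)

    def put(self, key, value):
        self._data[key] = (value, time.monotonic())

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, _ = entry
        self._data[key] = (value, time.monotonic())  # refresh on use
        return value

    def sweep(self):
        """Evict entries idle longer than ttl (done periodically)."""
        now = time.monotonic()
        for key in [k for k, (_, t) in self._data.items()
                    if now - t > self.ttl]:
            del self._data[key]

cache = TTLCache(ttl=0.1)             # tiny TTL so the demo runs fast
cache.put("page.htm", "<html>...</html>")
time.sleep(0.2)                       # let the entry go idle past the TTL
cache.sweep()
print(cache.get("page.htm"))          # evicted, so the lookup misses
```

Lengthening the TTL trades memory for fewer cache misses; shortening it does the reverse, which is exactly the judgment call described above.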
Unfortunately, the only way of adjusting the TTL is by editing the registry. Of course if you make a serious mistake editing the registry, you can cause Windows® or your applications to fail, so make sure you have a full system backup before continuing.
How to do it: Open the Registry Editor and navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\InetInfo\Parameters. Right-click on the Parameters subkey and select New | DWORD Value from the resulting shortcut menu. Name the new value ObjectCacheTTL. Now right-click on the value that you just created and select the Modify command. Set the base to Decimal and then enter the new TTL time (in seconds) into the Value Data box. You can set the TTL anywhere between 0 (no caching) and 4,294,967,295 (the maximum possible value, which indicates unlimited caching).
Recycle Worker Processes
6 We’ve all seen examples of poorly written code that causes memory leaks. What some people don’t realize is that Web sites can contain leaky code, too. Leaky code can really impact IIS performance over time as more and more memory drains away. One way of limiting the effects of a memory leak is to recycle the worker processes and memory at the application-pool level.
IIS recycles the worker process every 29 hours by default, but you can gain tighter control over this. You can schedule the recycle process to occur at a certain interval, at a specific time of day, or after a certain number of requests. You can also configure memory to be recycled once a certain threshold has been reached.
How to do it: You can access the interface used to control recycling by right-clicking on the application pool that contains the leaky Web application, and selecting the properties command from the resulting shortcut menu. The recycling options exist on the Recycling tab of the AppPool’s Properties sheet, as you see in Figure 4.
Figure 4 Recycling Worker Processes
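The effect of request-count-based recycling on a leaky application can be modeled with a toy simulation (the per-request leak size, the recycle threshold, and the request total below are all invented numbers for illustration):

```python
# A toy model of request-count recycling: a leaky "worker" accumulates
# memory with every request, and recycling it after a threshold caps the
# leak instead of letting it grow without bound.
class Worker:
    def __init__(self):
        self.leaked_kb = 0
        self.requests = 0

    def handle(self):
        self.requests += 1
        self.leaked_kb += 4        # pretend each request leaks 4 KB

RECYCLE_AFTER = 1000               # recycle every 1,000 requests

worker = Worker()
peak_leak = 0
for _ in range(3500):
    worker.handle()
    peak_leak = max(peak_leak, worker.leaked_kb)
    if worker.requests >= RECYCLE_AFTER:
        worker = Worker()          # fresh process: leaked memory reclaimed
print("peak leaked KB with recycling:", peak_leak)
```

Without the recycle step, the leak would grow linearly with traffic; with it, the worst case is bounded by whatever leaks in one recycle interval.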
Limit Queue Length
7 Queue length seems to be a touchy subject among IIS administrators, but in my opinion, it’s sometimes better to just let some visitors to your site leave rather than keep them waiting. When IIS receives a request, the request is placed into a queue and then serviced. If requests come in more quickly than the server can service them, the queue grows. This in itself isn’t necessarily a bad thing. It’s normal for Web sites to get traffic in spurts. For example, a site might receive ten simultaneous requests one minute and only two requests the next. Having a queue structure in place ensures that no requests are lost and that all of the requests are eventually serviced.
There comes a point when enough is enough though. If you’ve got 10,000 items in the queue and new requests continue to pour in, then the chances of IIS catching up with the requests any time soon are pretty slim. In these situations, it’s usually better to put a stop to new requests until the server has a chance to catch up. Imagine if there was no limit to the server’s queue length. Someone who wanted to launch a denial of service attack could flood the server with requests until the hard disk filled up. Limiting the queue length also ensures that anyone whose request is queued will be serviced within a reasonable amount of time. Everybody else will receive a notice indicating that the server is busy. Request queues service an entire application pool rather than an individual Web site.
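The behavior a queue limit produces is the classic bounded-queue pattern, sketched here (the tiny maxsize of 3 stands in for the 1,000 default so the demo stays short, and the "503 server busy" string is just an illustrative response):

```python
# A sketch of a bounded request queue: once the queue is full, new requests
# are rejected immediately with a "server busy" response instead of being
# allowed to pile up indefinitely.
import queue

request_queue = queue.Queue(maxsize=3)   # stand-in for the 1,000 default

def accept(request):
    try:
        request_queue.put_nowait(request)
        return "queued"
    except queue.Full:
        return "503 server busy"

results = [accept(f"req{i}") for i in range(5)]
print(results)
```

The first three requests wait their turn; the overflow gets an immediate busy response, which is exactly the fast-fail behavior the queue limit buys you.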
How to do it: To change the request queue length, open the Internet Information Services Manager, right-click on the application pool that you want to adjust the queue length for, and select the Properties command from the resulting shortcut menu. When you do, you will see the AppPool’s Properties sheet that you saw in Figure 3.
The Request Queue Limit option is found on the Properties sheet’s Performance tab. By default, the queue limit is set at 1,000, but you can adjust the queue length to meet your needs.
Shift Priority to the Working Set
8 Servers running Windows Server™ 2003 are configured by default to give preference to the file system cache over the working set when allocating memory. Microsoft does this because Windows benefits from having a large file system cache. Because IIS rides on top of the Windows operating system, it also benefits from having a large file system cache. If your server is a dedicated IIS server, though, you might see better performance if you shift priority to the working set instead. The reason is that when preference is given to the file system cache, pageable code is often written to virtual memory. The next time this information is needed, something else must be paged out to virtual memory and the previously paged information must be read back into physical memory before it can be used. This results in very slow processing.
How to do it: To shift the machine’s priority to the working set, open the Control Panel and choose the System option. When the System Properties sheet appears, select the Advanced tab and then click the Settings button found in the Performance section. This will cause Windows to display the Performance Options Properties sheet. Select the Advanced tab and then select the Programs option in the Memory usage section, as shown in Figure 5.
Figure 5 Performance Options
Add More Memory
9 The addition of physical memory is one of the best performance enhancements you can make. To optimize available memory, data is moved back and forth between RAM and disk-based virtual memory in a process known as paging. The more RAM the server machine has, the less paging will occur, and that’s good because paging is extremely inefficient and causes the machine to run much slower than it would if everything could fit into RAM. A little paging is normal, even on machines with plenty of RAM, but excessive paging will kill a machine’s performance. Not only does the machine have to stop and wait for the paging operation to complete, but the paging operation itself is processor intensive.
In addition to adding memory, you can create page files on multiple hard disks. Windows recommends a page file totaling 1.5 times the size of your system’s RAM. However, not all of that space has to exist within a single file. For example, if you have a server with four hard drives and 1GB of RAM, then placing a 384MB page file on each of the four drives would usually be more efficient than placing a 1.5GB page file on a single disk.
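The arithmetic behind that example is simply the recommended total divided across the drives:

```python
# The page file sizing from the example above: 1 GB (1,024 MB) of RAM,
# a combined page file of 1.5x RAM, split evenly across four disks.
ram_mb = 1024
total_pagefile_mb = ram_mb * 1.5      # 1,536 MB in total
per_disk_mb = total_pagefile_mb / 4   # spread across 4 drives
print(f"{per_disk_mb:.0f} MB per disk")
```

Spreading the file this way lets paging I/O proceed on several spindles at once instead of contending for one.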
Use Disk Striping
10 Although disk striping isn’t technically an IIS performance tweak, it can go a long way toward helping IIS to be more efficient. Disk striping spans files across multiple hard drives in an effort to achieve the combined performance of multiple drives. For example, if a volume is striped across five physical hard drives, then data can be read and written at approximately five times the speed that it could be if it existed on a single drive, because all five drives are being used simultaneously.
In the real world, you won’t get five times the performance by using five drives, because you lose some performance to the overhead of managing multiple drives. Some stripe sets also sacrifice some speed and capacity in order to achieve a degree of fault tolerance.
While including disk striping in your IIS architecture and design can help you achieve performance gains, disk striping and other advanced disk configurations are unfortunately a more complex topic than can be covered fully here.