Fix Windows error "The name limit for the local computer network adapter card was exceeded"

This morning I wasn't greeted with happy faces at work. Some users could not log in, some got connection errors from Outlook, and for many, some or all of their mapped drives were missing.

All this was happening on one of our (Citrix) terminal servers. Searching through the event logs I found warnings such as "The user 'X:' preference item in the 'Drive Mappings {F86B7E1E-0434-408F-AA1A-B616952AF5C2}' Group Policy object did not apply because it failed with error code '0x80070044 The name limit for the local computer network adapter card was exceeded.' This error was suppressed.".

Searching Google for the error code led me to Microsoft KB article http://support.microsoft.com/kb/319504. Essentially, the server was repeatedly failing to create network connections.

Running

netstat

on a command line showed me countless connections, more than I've ever seen on any system.

On closer inspection, most of them were in the TIME_WAIT state and running

netstat -ano

also revealed that most of these connections weren't even associated with a real Process ID:

Active Connections

  Proto  Local Address          Foreign Address        State           PID
...
  TCP    192.168.1.101:55389    192.168.1.30:1433      TIME_WAIT       0
  TCP    192.168.1.101:55390    192.168.1.30:1433      TIME_WAIT       0
  TCP    192.168.1.101:55391    192.168.1.30:1433      TIME_WAIT       0
  TCP    192.168.1.101:55392    192.168.1.30:1433      TIME_WAIT       0
  TCP    192.168.1.101:55393    192.168.1.30:1433      TIME_WAIT       0
...

This is very weird, because TIME_WAIT connections should be closed automatically after 240 seconds (the default).

Some forums and KB articles (such as http://support.microsoft.com/kb/970406) suggest lowering this timeout with the TcpTimedWaitDelay registry parameter, but this requires a reboot and I didn't want to bother our users.
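For reference, that registry tweak looks roughly like this (a sketch of the change described in the KB; the value is in seconds, and it only takes effect after the reboot I wanted to avoid):

```shell
:: Check the current TIME_WAIT delay. If the value doesn't exist,
:: Windows falls back to the 240-second default.
reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay

:: Lower it to 30 seconds (the documented range is 30-300),
:: then reboot for it to apply.
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f
```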

Another KB article (http://support.microsoft.com/kb/2553549) provides a solution for TIME_WAIT problems with servers running more than 497 days, but our terminal servers are rebooted weekly so this couldn't be related.

Running

netstat -ano | find /c "TIME_WAIT"

showed me that the number of TIME_WAIT connections was close to 16384. I always thought all ports above 1024 were available for outgoing connections, so I expected a system would only get into trouble with a number close to 64511 (ports 1025 through 65535)...

Reading http://en.wikipedia.org/wiki/Ephemeral_port clarified that IANA recommends the port range 49152 to 65535, which Microsoft has also used since Windows Server 2008. This led me to article http://support.microsoft.com/kb/929851, which shows how to view the port range:

C:\>netsh int ipv4 show dynamicport tcp

Protocol tcp Dynamic Port Range
---------------------------------
Start Port      : 49152
Number of Ports : 16384

Even better, it shows how to change the range on a running system, so I increased it to the maximum:

netsh int ipv4 set dynamicport tcp start=1025 num=64510
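Putting it together (a sketch; note that the command above only changes IPv4, and KB 929851 mentions that IPv6 keeps its own, separate range):

```shell
:: Widen the IPv4 dynamic range, then re-run "show" to verify it stuck
netsh int ipv4 set dynamicport tcp start=1025 num=64510
netsh int ipv4 show dynamicport tcp

:: If the box also talks over IPv6, adjust that range as well
netsh int ipv6 set dynamicport tcp start=1025 num=64510
```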

Best of all, after checking the connections with netstat, all TIME_WAIT connections with PID 0 had magically disappeared!

This took the pressure off the server without a reboot; happy users!

I haven't figured out yet which program was creating those countless (MS-SQL) connections and why Windows wasn't closing them (or closing them fast enough), but at least I have a quick-fix now.
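If I ever need to track down the culprit, netstat can name the owning executable while the connections are still established (once they drop into TIME_WAIT with PID 0, the ownership information is gone). A sketch, where 1234 is just a placeholder PID:

```shell
:: Must run from an elevated prompt: -b prints the executable that owns
:: each connection (on the line below it), -o the PID, -n skips DNS lookups
netstat -bno

:: Cross-check a PID taken from "netstat -ano" against the running tasks
tasklist /fi "PID eq 1234"
```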

I'm also unsure why a larger port range isn't the default. The Microsoft article above actually states that an Exchange server uses the range 1025 to 60000, and Linux uses 32768 to 61000 by default. I guess it's just some here-be-dragons caution...

Comments

Thank you so much for this post, I was completely stuck until I found this.

I was hoping you could help me: did you run this on the server or on the client computers? I'm having the same issue, but it's happening to my workstations, not servers; at least 4 clients per night while they're idle. When I get in, I have to reboot their computers so they can hit their network shares.

You are a life saver...

Perfect!

Only thing I could find that actually worked. Maybe you should share with M$
Thanks!

Thank you so much, dude. We followed it and fixed this long-standing issue in a critical project.