---
title: Advanced network options in Windows
description: Advanced networking for Windows containers.
author: jmesser81
ms.author: jgerend
ms.date: 10/20/2021
ms.topic: how-to
ms.assetid: 538871ba-d02e-47d3-a3bf-25cda4a40965
---
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016
Several network driver options are supported to take advantage of Windows-specific capabilities and features.
Applies to all network drivers
You can take advantage of Switch Embedded Teaming when creating container host networks for use by Docker by specifying multiple network adapters (separated by commas) with the -o com.docker.network.windowsshim.interface option.
C:\> docker network create -d transparent -o com.docker.network.windowsshim.interface="Ethernet 2", "Ethernet 3" TeamedNet
Applies to transparent and l2bridge network drivers
To set a VLAN ID for a network, add the option -o com.docker.network.windowsshim.vlanid=<VLAN ID> to the docker network create command. For instance, you might use the following command to create a transparent network with a VLAN ID of 11:
C:\> docker network create -d transparent -o com.docker.network.windowsshim.vlanid=11 MyTransparentNetwork
When you set the VLAN ID for a network, you are setting VLAN isolation for any container endpoints that will be attached to that network.
Ensure that your host network adapter (physical) is in trunk mode to enable all tagged traffic to be processed by the vSwitch with the vNIC (container endpoint) port in access mode on the correct VLAN.
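You can verify this configuration from PowerShell on the container host. This is a diagnostic sketch; the VM and adapter names shown are examples, not values from this article:

```shell
# On a virtualized container host, confirm the host VM's adapter passes tagged traffic
# (trunk mode with the relevant VLAN in the allowed list):
PS C:\> Get-VMNetworkAdapterVlan -VMName ContainerHostVM

# Confirm the physical/host-facing adapters that the vSwitch can bind to:
PS C:\> Get-NetAdapter
```

If the vNIC is correctly in access mode, Get-VMNetworkAdapterVlan reports OperationMode "Access" with the expected AccessVlanId.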
Applies to l2bridge networks
Ordinarily, when you create an l2bridge container network using docker network create, container endpoints do not have an HNS OutboundNAT policy applied, leaving containers unable to reach the outside world. When creating a network, you can use the -o com.docker.network.windowsshim.enable_outboundnat=<true|false> option to apply the OutboundNAT HNS policy and give containers access to the outside world:
C:\> docker network create -d l2bridge -o com.docker.network.windowsshim.enable_outboundnat=true MyL2BridgeNetwork
If there is a set of destinations for which NAT should not occur (for example, when container-to-container connectivity is needed), you also need to specify an exception list:
C:\> docker network create -d l2bridge -o com.docker.network.windowsshim.enable_outboundnat=true -o com.docker.network.windowsshim.outboundnat_exceptions=10.244.10.0/24 MyL2BridgeNetwork
Applies to all network drivers
Ordinarily, when you create a container network using docker network create, the network name that you provide is used by the Docker service but not by the HNS service. When creating a network, you can specify the name that the HNS service gives it by adding the option -o com.docker.network.windowsshim.networkname=<network name> to the docker network create command. For instance, you might use the following command to create a transparent network whose name is specified to the HNS service:
C:\> docker network create -d transparent -o com.docker.network.windowsshim.networkname=MyTransparentNetwork MyTransparentNetwork
Applies to all network drivers except 'nat'
To bind a network (attached through the Hyper-V virtual switch) to a specific network interface, add the option -o com.docker.network.windowsshim.interface=<Interface> to the docker network create command. For instance, you might use the following command to create a transparent network that is attached to the "Ethernet 2" network interface:
C:\> docker network create -d transparent -o com.docker.network.windowsshim.interface="Ethernet 2" TransparentNet2
Note: The value for com.docker.network.windowsshim.interface is the network adapter's Name, which can be found with:
PS C:\> Get-NetAdapter
Applies to all network drivers
Use the option -o com.docker.network.windowsshim.dnssuffix=<DNS SUFFIX> to specify the DNS suffix of a network, and the option -o com.docker.network.windowsshim.dnsservers=<DNS SERVER/S> to specify its DNS servers. For example, you might use the following command to set the DNS suffix of a network to "abc.com" and its DNS servers to 4.4.4.4 and 8.8.8.8:
C:\> docker network create -d transparent -o com.docker.network.windowsshim.dnssuffix=abc.com -o com.docker.network.windowsshim.dnsservers=4.4.4.4,8.8.8.8 MyTransparentNetwork
Applies to all network drivers
When testing DNS connectivity, use the Resolve-DnsName PowerShell cmdlet instead of nslookup. Resolve-DnsName uses the built-in Windows DNS client, which retries with all of the available DNS servers and, unlike nslookup, also works with configurations that use newer DNS technologies such as DNSSEC, DNS-over-HTTPS (DoH), and DNS-over-TLS (DoT). This is the best way to determine whether DNS is functioning within the container. Any application that relies on Windows to do the DNS lookup will get the same behavior as Resolve-DnsName.
PS C:\> resolve-dnsname bing.com
Name     Type TTL  Section IPAddress
----     ---- ---  ------- ---------
bing.com AAAA 882  Answer  2620:1ec:c11::200
bing.com A    2442 Answer  13.107.21.200
bing.com A    2442 Answer  204.79.197.200
If Resolve-DnsName succeeds but you still suspect DNS connectivity issues, the problem may lie outside of the container host. To narrow that down, use pktmon to trace the packet and check whether it leaves the container host in the correct format. In this example, 8.8.8.8 is blocked somewhere on the network. Use the following commands to determine whether the issue is with the Windows container host:
On the container host:
PS C:\> pktmon filter remove
PS C:\> pktmon filter add -t tcp -p 53
PS C:\> pktmon filter add -t udp -p 53
PS C:\> pktmon start --capture
In the container:
PS C:\> nslookup bing.com 8.8.8.8
DNS request timed out.
timeout was 2 seconds.
Server: UnKnown
Address: 8.8.8.8
DNS request timed out.
timeout was 2 seconds.
DNS request timed out.
timeout was 2 seconds.
DNS request timed out.
timeout was 2 seconds.
DNS request timed out.
timeout was 2 seconds.
*** Request to UnKnown timed-out
Back on the container host:
PS C:\> pktmon stop
PS C:\> pktmon etl2txt PktMon.etl
PS C:\> notepad pktmon.txt
Using Notepad, look at the Appearance # to find the last appearance of the DNS request packet, check which component it was last seen on, and confirm that the packet looks correct:
[02]0000.0000::2024-03-20 14:13:45.720095500 [Microsoft-Windows-PktMon] PktGroupId 281474976710677, PktNumber 1, Appearance 14, Direction Tx , Type Ethernet , Component 6, Edge 1, Filter 2, OriginalSize 80, LoggedSize 80
00-15-5D-C8-8E-16 > E8-B5-D0-2C-24-40, ethertype IPv4 (0x0800), length 80: 10.127.130.152.59312 > 8.8.8.8.53: 1+ PTR? 8.8.8.8.in-addr.arpa. (38)
You can find the list of component IDs and names at the end of the file. In this example, Component 6 is the Ethernet adapter:
[00]1D7C.0AFC::2024-03-20 14:13:59.067489700 [Microsoft-Windows-PktMon] Component 6, Type Miniport , Name netvsc.sys, Microsoft Hyper-V Network Adapter #2
[00]1D7C.0AFC::2024-03-20 14:13:59.067489900 [Microsoft-Windows-PktMon] Property: Component 6, PhysAddress = 0x00155DC88E16
[00]1D7C.0AFC::2024-03-20 14:13:59.067490200 [Microsoft-Windows-PktMon] Property: Component 6, NdisMedium = Ethernet
Whenever the last appearance is the Ethernet adapter, you can assume the packet left the machine. You should also verify that the IP addresses are correct and that the destination MAC address matches the physical Ethernet switch. If you don't see a response in the pktmon log, then you know the container host never received the reply from the network.
See the Packet Monitor (pktmon) documentation for more information.
Here's a list of handy tips and insights, inspired by common questions on Windows container networking that we hear from the community.
As part of KB4015217, HNS requires that IPv6 be enabled on Windows container hosts. If you're running into an error such as the one below, there's a chance that IPv6 is disabled on your host machine.
docker: Error response from daemon: container e15d99c06e312302f4d23747f2dfda4b11b92d488e8c5b53ab5e4331fd80636d encountered an error during CreateContainer: failure in a Windows system call: Element not found.
We're working on platform changes to automatically detect/prevent this issue. Currently the following workaround can be used to ensure IPv6 is enabled on your host machine:
C:\> reg delete HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters /v DisabledComponents /f
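Before (or after) applying the workaround, you can check the host's IPv6 state from PowerShell. A diagnostic sketch; adapter names will vary per machine:

```shell
# List the IPv6 binding state for each adapter; Enabled should be True
PS C:\> Get-NetAdapterBinding -ComponentID ms_tcpip6

# Inspect the DisabledComponents registry value directly;
# if the value is absent or 0, IPv6 is fully enabled
PS C:\> Get-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters -Name DisabledComponents
```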
NEW: We are working to make it possible to run Linux and Windows containers side by side without the Moby Linux VM. See the blog post about Linux Containers on Windows (LCOW) for details and how to get started.
NOTE: LCOW deprecates the Moby Linux VM and uses the default HNS "nat" internal vSwitch.
Moby Linux VMs use DockerNAT switch with Docker for Windows (a product of Docker CE)
Docker for Windows (the Windows driver for the Docker CE engine) on Windows 10 will use an internal vSwitch named 'DockerNAT' to connect Moby Linux VMs to the container host. Developers using Moby Linux VMs on Windows should be aware that their hosts are using the DockerNAT vSwitch rather than the "nat" vSwitch that is created by the HNS service (which is the default switch used for Windows containers).
If the container host is virtualized, and you wish to use DHCP for IP assignment, you must enable MACAddressSpoofing on the virtual machine's network adapter. Otherwise, the Hyper-V host will block network traffic from the containers in the VM with multiple MAC addresses. You can enable MACAddressSpoofing with this PowerShell command:
PS C:\> Get-VMNetworkAdapter -VMName ContainerHostVM | Set-VMNetworkAdapter -MacAddressSpoofing On
If you are running VMware as your hypervisor, you will need to enable promiscuous mode for this to work. Details can be found in VMware's documentation.
If you wish to create more than one transparent network you must specify to which (virtual) network adapter the external Hyper-V Virtual Switch should bind. To specify the interface for a network, use the following syntax:
# General syntax:
C:\> docker network create -d transparent -o com.docker.network.windowsshim.interface=<INTERFACE NAME> <NETWORK NAME>
# Example:
C:\> docker network create -d transparent -o com.docker.network.windowsshim.interface="Ethernet 2" myTransparent2
When using static IP assignment, you must first ensure that the --subnet and --gateway parameters are specified when the network is created. The subnet and gateway IP address should be the same as the network settings for the container host - i.e. the physical network. For example, here's how you might create a transparent network then run an endpoint on that network using static IP assignment:
# Example: Create a transparent network using static IP assignment
# A network create command for a transparent container network corresponding to the physical network with IP prefix 10.123.174.0/23
C:\> docker network create -d transparent --subnet=10.123.174.0/23 --gateway=10.123.174.1 MyTransparentNet
# Run a container attached to MyTransparentNet
C:\> docker run -it --network=MyTransparentNet --ip=10.123.174.105 windowsservercore cmd
Only static IP assignment is supported with container networks created using the l2bridge driver. As stated above, remember to use the --subnet and --gateway parameters to create a network that's configured for static IP assignment.
Note that if multiple networks which use an external vSwitch for connectivity (e.g. Transparent, L2 Bridge, L2 Transparent) are created on the same container host, each of them requires its own network adapter.
Static IP assignment is performed directly on the container's network adapter and must only be performed when the container is in a STOPPED state. "Hot-add" of container network adapters or changes to the network stack is not supported (in Windows Server 2016) while the container is running.
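Following the same pattern as the transparent example above, an l2bridge network with static IP assignment might be created like this (a sketch: the subnet, gateway, and container IP are example values, not ones taken from your physical network):

```shell
# Create an l2bridge network configured for static IP assignment
# (example prefix; match your physical network's settings)
C:\> docker network create -d l2bridge --subnet=192.168.1.0/24 --gateway=192.168.1.1 MyL2BridgeNet

# Run a container attached to MyL2BridgeNet with a static IP from that subnet
C:\> docker run -it --network=MyL2BridgeNet --ip=192.168.1.105 windowsservercore cmd
```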
If you encounter an error in creating a transparent network, it is possible that there is an external vSwitch on your system which was not automatically discovered by Docker and is therefore preventing the transparent network from being bound to your container host's external network adapter.
When creating a transparent network, Docker creates an external vSwitch for the network and then tries to bind the switch to an (external) network adapter - the adapter could be a VM network adapter or the physical network adapter. If a vSwitch has already been created on the container host, and it is visible to Docker, the Windows Docker engine will use that switch instead of creating a new one. However, if the vSwitch was created out-of-band (i.e. created on the container host using Hyper-V Manager or PowerShell) and is not yet visible to Docker, the Windows Docker engine will try to create a new vSwitch and then be unable to connect the new switch to the container host's external network adapter (because the network adapter will already be connected to the switch that was created out-of-band).
For example, this issue would arise if you were to first create a new vSwitch on your host while the Docker service was running, then try to create a transparent network. In this case, Docker would not recognize the switch that you created and it would create a new vSwitch for the transparent network.
There are three approaches for solving this issue:
- You can of course delete the vSwitch that was created out-of-band, which will allow Docker to create a new vSwitch and connect it to the host network adapter without issue. Before choosing this approach, ensure that your out-of-band vSwitch is not being used by other services (e.g. Hyper-V).
- Alternatively, if you decide to use an external vSwitch that was created out-of-band, restart the Docker and HNS services to make the switch visible to Docker.
PS C:\> restart-service hns
PS C:\> restart-service docker
- Another option is to use the '-o com.docker.network.windowsshim.interface' option to bind the transparent network's external vSwitch to a specific network adapter which is not already in use on the container host (i.e. a network adapter other than the one being used by the vSwitch that was created out-of-band). The '-o' option is described further in the Creating multiple transparent networks on a single container host section of this document.
Although we continue to add new features and drive development, some of these features will not be back-ported to older platforms. Instead, the best plan of action is to "get on the train" for the latest updates to Windows 10 and Windows Server. The section below lists some workarounds and caveats that apply to Windows Server 2016 and older versions of Windows 10 (i.e. before the 1703 Creators Update).
The partitions for any new NAT networks must be created under the larger internal NAT networking prefix. The prefix can be found by running the following command from PowerShell and referencing the "InternalIPInterfaceAddressPrefix" field.
PS C:\> Get-NetNAT
For example, the host's NAT network internal prefix might be 172.16.0.0/16. In this case, Docker can be used to create additional NAT networks as long as they are a subset of the 172.16.0.0/16 prefix. For example, two NAT networks could be created with the IP prefixes 172.16.1.0/24 (gateway 172.16.1.1) and 172.16.2.0/24 (gateway 172.16.2.1).
C:\> docker network create -d nat --subnet=172.16.1.0/24 --gateway=172.16.1.1 CustomNat1
C:\> docker network create -d nat --subnet=172.16.2.0/24 --gateway=172.16.2.1 CustomNat2
The newly created networks can be listed using:
C:\> docker network ls
Docker Compose can be used to define and configure container networks alongside the containers/services that will be using those networks. The Compose 'networks' key is used as the top-level key in defining the networks to which containers will be connected. For example, the syntax below defines the preexisting NAT network created by Docker to be the 'default' network for all containers/services defined in a given Compose file.
networks:
  default:
    external:
      name: "nat"
Similarly, the following syntax can be used to define a custom NAT network.
Note: The 'custom NAT network' defined in the below example is defined as a partition of the container host's pre-existing NAT internal prefix. See the above section, 'Multiple NAT Networks,' for more context.
networks:
  default:
    driver: nat
    ipam:
      driver: default
      config:
        - subnet: 172.16.3.0/24
For further information on defining/configuring container networks using Docker Compose, refer to the Compose File reference.
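As a usage sketch, a Compose file like the one above is applied in the standard way (assuming it is saved under the default name, docker-compose.yml, in the current directory):

```shell
# Start the services defined in docker-compose.yml; Compose creates
# or reuses the networks declared under the 'networks' key
C:\> docker-compose up -d

# Confirm the network that Compose created or attached to
C:\> docker network ls
```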