Linux based (kernel) network bonding doesn't seem to be supported by vmware-server.
Linux host using Bonding (I desire mode 6, adaptive load balancing) with 3 gigabit nics.
The bond0 adapter functions as expected (tested prior to vmware installation), I install vmware-server and assign a bridge to bond0. dmesg shows:
/dev/vmnet: open called by PID 3203 (vmnet-bridge)
/dev/vmnet: hub 0 does not exist, allocating memory.
/dev/vmnet: port on hub 0 successfully opened
bridge-bond0: enabling the bridge
bridge-bond0: up
bridge-bond0: already up
bridge-bond0: attached
/dev/vmnet: open called by PID 3763 (vmware-vmx)
device eth0 entered promiscuous mode
device bond0 entered promiscuous mode
bridge-bond0: enabled promiscuous mode
/dev/vmnet: port on hub 0 successfully opened
Yet, none of the vm's can send or receive data on the network. So I assume vmware is only listening on the primary interface in the bond. Switching to bonding mode 5 confirms this. With bonding mode 5, all vm's can communicate on eth0 (primary) but if eth0 is unplugged vm's no longer can communicate. dmesg says:
e1000: eth0: e1000_watchdog: NIC Link is Down
bonding: bond0: link status definitely down for interface eth0, disabling it
bonding: bond0: making interface eth1 the new active one.
device eth0 left promiscuous mode
device eth1 entered promiscuous mode
So, anyone have any ideas how to get a (linux host) bonding solution going that uses some form of load balancing?