Trunk-bond-vlan-bridge on KVM and LXC host

Linux Kernel, Network, and Services configuration.
aldrtik
Posts: 1
Joined: 2021-06-01 21:07

Trunk-bond-vlan-bridge on KVM and LXC host

#1 Post by aldrtik »

Hello,

on Stretch I set this up (see subject) the Debian way, via /etc/network/interfaces, and it worked well.
The same configuration on Buster is interpreted differently and does not work.

Config from interfaces:

--------------------------------

auto lo
iface lo inet loopback

allow-hotplug ens3f0
iface ens3f0 inet manual

# The primary network interface
allow-hotplug ens3f1
iface ens3f1 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves ens3f0 ens3f1
    bond-mode 1
    # bond-arp-validate all
    # Specifies the MII link monitoring frequency in milliseconds, i.e. how
    # often the link state of each slave is inspected for link failures.
    # bond-miimon 500
    # Specifies the time, in milliseconds, to wait before disabling a slave
    # after a link failure has been detected.
    bond-downdelay 500
    bond-updelay 500
    bond-primary ens3f1
    bond-arp-interval 5000
    bond-arp-ip-target 10.5.75.1

iface bond0.20 inet manual
iface bond0.21 inet manual

auto br0.20
iface br0.20 inet static
    address 10.5.75.138
    netmask 255.255.255.0
    network 10.5.75.0
    broadcast 10.5.75.255
    gateway 10.5.75.1
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers 10.5.64.4 10.5.65.4
    dns-search domain.org
    bridge_ports bond0.20
    bridge_fd 0
    bridge_maxwait 0
    bridge_stp off

auto br0.21
iface br0.21 inet manual
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers 10.5.64.4 10.5.65.4
    dns-search domain.org
    bridge_ports bond0.21

auto br0
iface br0 inet manual
    bridge_ports bond0
    bridge_fd 0
    bridge_maxwait 0
    bridge_stp off
    bridge_vlan_aware yes

-------------------------------------------
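For reference, one variant I could try instead (my own guess, not verified): declare the VLAN sub-interfaces explicitly with the vlan package's vlan-raw-device option, and give the bridges names without a dot, so nothing can mistake them for VLAN devices. The name vmbr20 below is just something I made up:

```
auto bond0.20
iface bond0.20 inet manual
    # explicitly a VLAN 20 sub-interface of bond0
    vlan-raw-device bond0

auto vmbr20
iface vmbr20 inet static
    address 10.5.75.138
    netmask 255.255.255.0
    gateway 10.5.75.1
    bridge_ports bond0.20
    bridge_fd 0
    bridge_stp off
```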

On Stretch the interfaces are generated like this:

bridge link

11: bond0 state UP : <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 2
12: bond0.20 state UP @bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0.20 state forwarding priority 32 cost 2
14: bond0.21 state UP @bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0.21 state forwarding priority 32 cost 2



brctl show

bridge name bridge id STP enabled interfaces
br0 8000.001b21da10c0 no bond0
br0.20 8000.001b21da10c0 no bond0.20
br0.21 8000.001b21da10c0 no bond0.21


ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens3f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 00:1b:21:da:10:c0 brd ff:ff:ff:ff:ff:ff
5: ens3f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 00:1b:21:da:10:c0 brd ff:ff:ff:ff:ff:ff
11: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
link/ether 00:1b:21:da:10:c0 brd ff:ff:ff:ff:ff:ff
12: bond0.20@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0.20 state UP group default qlen 1000
link/ether 00:1b:21:da:10:c0 brd ff:ff:ff:ff:ff:ff
13: br0.20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:1b:21:da:10:c0 brd ff:ff:ff:ff:ff:ff
inet 10.5.75.137/24 brd 10.5.75.255 scope global br0.20
valid_lft forever preferred_lft forever
inet6 fe80::21b:21ff:feda:10c0/64 scope link
valid_lft forever preferred_lft forever
14: bond0.21@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0.21 state UP group default qlen 1000
link/ether 00:1b:21:da:10:c0 brd ff:ff:ff:ff:ff:ff
15: br0.21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:1b:21:da:10:c0 brd ff:ff:ff:ff:ff:ff
inet6 fe80::21b:21ff:feda:10c0/64 scope link
valid_lft forever preferred_lft forever
16: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:1b:21:da:10:c0 brd ff:ff:ff:ff:ff:ff
inet6 fe80::21b:21ff:feda:10c0/64 scope link
valid_lft forever preferred_lft forever


On Stretch this config behaves like the first case, "Adding VLANs to the mix – the usual guest access mode", from David Vassallo's blog:
https://blog.davidvassallo.me/2012/05/0 ... he-guests/

But on Buster the devices are generated as:

root@blade2:~# bridge link
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 2

root@blade2:~# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.9295da8a5326 no bond0

root@blade2:~# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: ens3f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 92:95:da:8a:53:26 brd ff:ff:ff:ff:ff:ff
4: ens3f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 92:95:da:8a:53:26 brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
link/ether 92:95:da:8a:53:26 brd ff:ff:ff:ff:ff:ff
9: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 92:95:da:8a:53:26 brd ff:ff:ff:ff:ff:ff
inet6 fe80::9095:daff:fe8a:5326/64 scope link
valid_lft forever preferred_lft forever
10: br0.20@br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 92:95:da:8a:53:26 brd ff:ff:ff:ff:ff:ff
inet 10.5.75.138/24 brd 10.5.75.255 scope global br0.20
valid_lft forever preferred_lft forever
inet6 fe80::9095:daff:fe8a:5326/64 scope link
valid_lft forever preferred_lft forever
11: br0.21@br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 92:95:da:8a:53:26 brd ff:ff:ff:ff:ff:ff
inet6 fe80::9095:daff:fe8a:5326/64 scope link
valid_lft forever preferred_lft forever
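Since br0 is configured with bridge_vlan_aware yes, the bridge's VLAN filtering state might matter here. If anyone wants to compare, it can be inspected and changed at runtime with iproute2 (standard bridge(8) commands; the VLAN IDs are the ones from my config above, so this is a sketch, not a confirmed fix):

```shell
# Show which VLAN IDs are currently allowed on each bridge port
bridge -d vlan show

# Allow tagged VLANs 20 and 21 on the bond0 port of br0
bridge vlan add dev bond0 vid 20 master
bridge vlan add dev bond0 vid 21 master

# Let the bridge device itself handle those VLANs
# (needed for br0.20/br0.21 on top of br0)
bridge vlan add dev br0 vid 20 self
bridge vlan add dev br0 vid 21 self
```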

Because Buster interprets the same config differently, packets are lost:

ping -c 10 10.5.75.138
PING 10.5.75.138 (10.5.75.138) 56(84) bytes of data.
64 bytes from 10.5.75.138: icmp_seq=1 ttl=64 time=0.152 ms
64 bytes from 10.5.75.138: icmp_seq=2 ttl=64 time=0.103 ms
64 bytes from 10.5.75.138: icmp_seq=3 ttl=64 time=0.154 ms
64 bytes from 10.5.75.138: icmp_seq=4 ttl=64 time=0.149 ms
64 bytes from 10.5.75.138: icmp_seq=5 ttl=64 time=0.107 ms
64 bytes from 10.5.75.138: icmp_seq=9 ttl=64 time=0.151 ms
64 bytes from 10.5.75.138: icmp_seq=10 ttl=64 time=0.283 ms

--- 10.5.75.138 ping statistics ---

10 packets transmitted, 7 received, 30% packet loss, time 9219ms
rtt min/avg/max/mdev = 0.103/0.157/0.283/0.055 ms
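To narrow down where the missing echo requests disappear, one could capture on both layers at once (a diagnostic sketch, using the interface names from above):

```shell
# Do the lost pings even arrive on the bond, tagged with VLAN 20?
tcpdump -ni bond0 -e vlan 20 and icmp

# ...and does the bridge-level VLAN device see them?
tcpdump -ni br0.20 icmp
```

If the requests show up on bond0 but not on br0.20, the frames are being dropped inside the bridge (e.g. by VLAN filtering) rather than on the wire.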


My question:
Does anybody know why Buster interprets the same config differently (the generated devices are not the same), so that the setup no longer works correctly?


Thanks for insight.
Ales
