I recently needed to configure bonding between two network cards at a customer site, and I wanted to share through this blog post my findings and how I built it, illustrated with some traces. I will also briefly compare what is and is not possible on the ODA.
Why should I use bonding?
Bonding is a technology that allows you to merge several network interfaces, either ports of the same card or ports from separate network cards, into a single logical interface. The purpose is either to provide network redundancy in case of a failure (fault tolerance) or to increase the network throughput (bandwidth), known as load balancing.
What bonding mode should I use?
There are 7 bonding modes available to achieve these purposes. All bonding modes guarantee fault tolerance, and some also provide load balancing. For bonding mode 4 the switch needs to support link aggregation (EtherChannel). Link aggregation can be configured manually on the switch or negotiated automatically using the LACP protocol (dynamic link aggregation).
Mode | Name | Description | Fault tolerance | Load balancing
0 | Round-robin | Packets are transmitted sequentially through each interface, one after the other. | YES | YES
1 | Active-backup | Only one interface is active at a time; the other interfaces of the bond are configured as backups. If the active interface fails, one of the backup interfaces becomes the active one. The MAC address is only exposed on one port at a time to avoid confusing the switch. | YES | NO
2 | Balance-xor | Peer connections are matched to the MAC addresses of the slave interfaces. Once a connection is established, transmission to that peer always goes over the same slave interface. | YES | YES
3 | Broadcast | All network transmissions are sent on all slaves. | YES | NO
4 | 802.3ad – Dynamic link aggregation | Aggregates all interfaces of the bond into one logical link. Traffic is sent and received on all slaves of the aggregation. The switch needs to support LACP and LACP needs to be enabled. | YES | YES
5 | TLB – Transmit load balancing | Outgoing traffic is distributed across the slave interfaces depending on the current load of each one. Incoming traffic is received by the currently active slave. If the active interface fails, another slave takes over the MAC address of the failed interface. | YES | YES
6 | ALB – Adaptive load balancing | Includes TLB (transmit load balancing) and adds RLB (receive load balancing). Load balancing of received packets is achieved through ARP (Address Resolution Protocol) negotiation. | YES | YES
In my case, the customer only wanted to guarantee the service in case of a network card failure; no load balancing was required and the switch was not configured to use LACP. I therefore decided to configure the bonding in active-backup mode, which provides redundancy only.
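If you want to double-check which modes the bonding driver of your kernel supports before deciding, the module information lists them. This is only a minimal sketch; the exact wording of the output depends on the kernel version:

modinfo bonding | grep -i mode

Once the bond exists, /sys/class/net/bond1/bonding/mode shows the mode currently in use.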
Bonding configuration
Checking existing connections
The server has 2 network cards, each with 4 interfaces (ports).
Card 1: em1, em2, em3, em4
Card 2: p4p1, p4p2, p4p3, p4p4
No bonding currently exists, as shown in the output below.
[root@SRV ~]# nmcli connection
NAME   UUID                                  TYPE      DEVICE
p4p1   d3cdc8f5-2d80-433d-9502-3b357c57f307  ethernet  p4p1
em1    f412b74b-2160-4914-b716-88f6b4d58c1f  ethernet  --
em2    0ab78e63-bde7-4c77-b455-7dcb1d5c6813  ethernet  --
em3    d6569615-322f-477b-9693-b42ee3dbe21e  ethernet  --
em4    52949f94-52d1-463e-ba32-06c272c07ce0  ethernet  --
p4p2   12f01c70-4aab-42db-b0e8-b5422e43c1b9  ethernet  --
p4p3   0db2f5b9-d968-44cb-a042-cff20f112ed4  ethernet  --
p4p4   a2a0ebc4-ca74-452e-94ba-6d5fedbfdf28  ethernet  --
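If you are unsure whether a bond already exists on a server, two quick checks can confirm it. This is only a sketch; /proc/net/bonding is only present once the bonding driver is loaded:

ls /proc/net/bonding/ 2>/dev/null                          # one file per existing bond
nmcli -t -f NAME,TYPE,DEVICE connection show | grep bond   # any connection of type bond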
Checking existing configuration
The server was configured with a single IP address, on the p4p1 network interface.
[root@SRV network-scripts]# pwd
/etc/sysconfig/network-scripts
[root@SRV network-scripts]# ls -l ifcfg*
-rw-r--r--. 1 root root 275 Sep 21 17:09 ifcfg-em1
-rw-r--r--. 1 root root 275 Sep 21 17:09 ifcfg-em2
-rw-r--r--. 1 root root 275 Sep 21 17:09 ifcfg-em3
-rw-r--r--. 1 root root 275 Sep 21 17:09 ifcfg-em4
-rw-r--r--. 1 root root 254 Aug 19  2019 ifcfg-lo
-rw-r--r--. 1 root root 378 Sep 21 17:09 ifcfg-p4p1
-rw-r--r--. 1 root root 277 Sep 21 17:09 ifcfg-p4p2
-rw-r--r--. 1 root root 277 Sep 21 17:09 ifcfg-p4p3
-rw-r--r--. 1 root root 277 Sep 21 17:09 ifcfg-p4p4
[root@SRV network-scripts]# more ifcfg-p4p1
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=p4p1
UUID=d3cdc8f5-2d80-433d-9502-3b357c57f307
DEVICE=p4p1
ONBOOT=yes
IPADDR=192.168.1.180
PREFIX=24
GATEWAY=192.168.1.1
DNS1=192.168.1.5
DOMAIN=domain.com
IPV6_PRIVACY=no
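The same settings can also be read through NetworkManager instead of the ifcfg file, which is convenient before moving them to the bond. A sketch, assuming the connection is still named p4p1:

nmcli -f ipv4.addresses,ipv4.gateway,ipv4.dns,ipv4.dns-search connection show p4p1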
Creating the bonding
The purpose is to create a bonding across the 2 network cards for fault tolerance. The bonding will therefore be composed of the slave interfaces p4p1 and em1, one port from each card.
The selected bonding mode is mode 1 (active-backup).
[root@SRV network-scripts]# nmcli con add type bond con-name bond1 ifname bond1 mode active-backup ip4 192.168.1.180/24
Connection 'bond1' (7b736616-f72d-46b7-b4eb-01468639889b) successfully added.
[root@SRV network-scripts]# nmcli conn
NAME   UUID                                  TYPE      DEVICE
p4p1   d3cdc8f5-2d80-433d-9502-3b357c57f307  ethernet  p4p1
bond1  7b736616-f72d-46b7-b4eb-01468639889b  bond      bond1
em1    f412b74b-2160-4914-b716-88f6b4d58c1f  ethernet  --
em2    0ab78e63-bde7-4c77-b455-7dcb1d5c6813  ethernet  --
em3    d6569615-322f-477b-9693-b42ee3dbe21e  ethernet  --
em4    52949f94-52d1-463e-ba32-06c272c07ce0  ethernet  --
p4p2   12f01c70-4aab-42db-b0e8-b5422e43c1b9  ethernet  --
p4p3   0db2f5b9-d968-44cb-a042-cff20f112ed4  ethernet  --
p4p4   a2a0ebc4-ca74-452e-94ba-6d5fedbfdf28  ethernet  --
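Note that nmcli only wrote mode=active-backup into BONDING_OPTS. If you also want to tune the link monitoring interval, the bond options can be adjusted afterwards, for example (a sketch; miimon=100 matches the MII polling interval visible later in /proc/net/bonding/bond1):

nmcli connection modify bond1 bond.options "mode=active-backup,miimon=100"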
Updating the bonding with the appropriate gateway, DNS and domain information
[root@SRV network-scripts]# cat ifcfg-bond1
BONDING_OPTS=mode=active-backup
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=192.168.1.180
PREFIX=24
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=bond1
UUID=7b736616-f72d-46b7-b4eb-01468639889b
DEVICE=bond1
ONBOOT=yes
[root@SRV network-scripts]# vi ifcfg-bond1
[root@SRV network-scripts]# cat ifcfg-bond1
BONDING_OPTS=mode=active-backup
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=192.168.1.180
PREFIX=24
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=bond1
UUID=7b736616-f72d-46b7-b4eb-01468639889b
DEVICE=bond1
ONBOOT=yes
GATEWAY=192.168.1.1
DNS1=192.168.1.5
DOMAIN=domain.com
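Instead of editing ifcfg-bond1 by hand, the same settings could be applied with nmcli (a sketch using the values of this setup); the change takes effect the next time the connection is brought up:

nmcli connection modify bond1 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.5 ipv4.dns-search domain.com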
Adding the slave interface em1 to the bond1 bonding
Each slave needs to be added to the bonding master.
We will first delete the existing em1 connection:
[root@SRV network-scripts]# nmcli con delete em1
Connection 'em1' (f412b74b-2160-4914-b716-88f6b4d58c1f) successfully deleted.
We will then create a new em1 connection as part of the bond1 bonding configuration:
[root@SRV network-scripts]# nmcli con add type bond-slave ifname em1 con-name em1 master bond1
Connection 'em1' (8c72c383-e1e9-4e4b-ac2f-3d3d81d5159b) successfully added.
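To verify that the new connection is really attached to the bond, its master and slave-type properties can be queried (a sketch):

nmcli -f connection.master,connection.slave-type connection show em1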
And we can check the connections:
[root@SRV network-scripts]# nmcli con
NAME   UUID                                  TYPE      DEVICE
p4p1   d3cdc8f5-2d80-433d-9502-3b357c57f307  ethernet  p4p1
bond1  7b736616-f72d-46b7-b4eb-01468639889b  bond      bond1
em1    8c72c383-e1e9-4e4b-ac2f-3d3d81d5159b  ethernet  em1
em2    0ab78e63-bde7-4c77-b455-7dcb1d5c6813  ethernet  --
em3    d6569615-322f-477b-9693-b42ee3dbe21e  ethernet  --
em4    52949f94-52d1-463e-ba32-06c272c07ce0  ethernet  --
p4p2   12f01c70-4aab-42db-b0e8-b5422e43c1b9  ethernet  --
p4p3   0db2f5b9-d968-44cb-a042-cff20f112ed4  ethernet  --
p4p4   a2a0ebc4-ca74-452e-94ba-6d5fedbfdf28  ethernet  --
Activating the bonding
We first need to activate the slave we just configured:
[root@SRV network-scripts]# nmcli con up em1
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)
We can now activate the bonding:
[root@SRV network-scripts]# nmcli con up bond1
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)
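At the device level, the state of the bond and of its slave can also be checked with nmcli (a sketch; GENERAL.STATE and GENERAL.CONNECTION are standard nmcli device fields):

nmcli device status
nmcli -f GENERAL.STATE,GENERAL.CONNECTION device show bond1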
We can check the connections:
[root@SRV network-scripts]# nmcli con
NAME   UUID                                  TYPE      DEVICE
p4p1   d3cdc8f5-2d80-433d-9502-3b357c57f307  ethernet  p4p1
bond1  7b736616-f72d-46b7-b4eb-01468639889b  bond      bond1
em1    8c72c383-e1e9-4e4b-ac2f-3d3d81d5159b  ethernet  em1
em2    0ab78e63-bde7-4c77-b455-7dcb1d5c6813  ethernet  --
em3    d6569615-322f-477b-9693-b42ee3dbe21e  ethernet  --
em4    52949f94-52d1-463e-ba32-06c272c07ce0  ethernet  --
p4p2   12f01c70-4aab-42db-b0e8-b5422e43c1b9  ethernet  --
p4p3   0db2f5b9-d968-44cb-a042-cff20f112ed4  ethernet  --
p4p4   a2a0ebc4-ca74-452e-94ba-6d5fedbfdf28  ethernet  --
Adding the slave interface p4p1 to the bond1 bonding
We will first delete the existing p4p1 connection:
[root@SRV network-scripts]# nmcli con delete p4p1
Connection 'p4p1' (d3cdc8f5-2d80-433d-9502-3b357c57f307) successfully deleted.
[root@SRV network-scripts]# nmcli con
NAME   UUID                                  TYPE      DEVICE
bond1  7b736616-f72d-46b7-b4eb-01468639889b  bond      bond1
em1    8c72c383-e1e9-4e4b-ac2f-3d3d81d5159b  ethernet  em1
em2    0ab78e63-bde7-4c77-b455-7dcb1d5c6813  ethernet  --
em3    d6569615-322f-477b-9693-b42ee3dbe21e  ethernet  --
em4    52949f94-52d1-463e-ba32-06c272c07ce0  ethernet  --
p4p2   12f01c70-4aab-42db-b0e8-b5422e43c1b9  ethernet  --
p4p3   0db2f5b9-d968-44cb-a042-cff20f112ed4  ethernet  --
p4p4   a2a0ebc4-ca74-452e-94ba-6d5fedbfdf28  ethernet  --
We will then create a new p4p1 connection as part of the bond1 bonding configuration:
[root@SRV network-scripts]# nmcli con add type bond-slave ifname p4p1 con-name p4p1 master bond1
Connection 'p4p1' (efef0972-4b3f-46a2-b054-ebd1aa201056) successfully added.
And we can check the connections:
[root@SRV network-scripts]# nmcli con
NAME   UUID                                  TYPE      DEVICE
bond1  7b736616-f72d-46b7-b4eb-01468639889b  bond      bond1
em1    8c72c383-e1e9-4e4b-ac2f-3d3d81d5159b  ethernet  em1
p4p1   efef0972-4b3f-46a2-b054-ebd1aa201056  ethernet  p4p1
em2    0ab78e63-bde7-4c77-b455-7dcb1d5c6813  ethernet  --
em3    d6569615-322f-477b-9693-b42ee3dbe21e  ethernet  --
em4    52949f94-52d1-463e-ba32-06c272c07ce0  ethernet  --
p4p2   12f01c70-4aab-42db-b0e8-b5422e43c1b9  ethernet  --
p4p3   0db2f5b9-d968-44cb-a042-cff20f112ed4  ethernet  --
p4p4   a2a0ebc4-ca74-452e-94ba-6d5fedbfdf28  ethernet  --
Activating the new p4p1 slave interface
We can now activate the newly added slave:
[root@SRV network-scripts]# nmcli con up p4p1
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/11)
Restart the network service
We will restart the network service so that the new bonding configuration is taken into account:
[root@SRV network-scripts]# service network restart
Restarting network (via systemctl):                        [  OK  ]
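On systems where the legacy network service is no longer installed, re-activating the connections with nmcli achieves the same result (a sketch; run it from the console, since bringing the bond down will interrupt an SSH session going through it):

nmcli connection down bond1
nmcli connection up bond1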
We can check the IP configuration:
[root@SRV network-scripts]# ip addr sh
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1: mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
3: em3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:4e brd ff:ff:ff:ff:ff:ff
4: em2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:51 brd ff:ff:ff:ff:ff:ff
5: em4: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:4f brd ff:ff:ff:ff:ff:ff
6: p4p1: mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
7: p4p2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:31 brd ff:ff:ff:ff:ff:ff
8: p4p3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:32 brd ff:ff:ff:ff:ff:ff
9: p4p4: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:33 brd ff:ff:ff:ff:ff:ff
11: bond1: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.180/24 brd 192.168.1.255 scope global noprefixroute bond1
       valid_lft forever preferred_lft forever
    inet6 fe80::b4f9:e44d:25fc:3a6/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Check IP configuration files
We now have our bond ifcfg configuration file:
[root@SRV ~]# cd /etc/sysconfig/network-scripts
[root@SRV network-scripts]# pwd
/etc/sysconfig/network-scripts
[root@SRV network-scripts]# ls -ltrh ifcfg*
-rw-r--r--. 1 root root 254 Aug 19  2019 ifcfg-lo
-rw-r--r--. 1 root root 277 Sep 21 17:09 ifcfg-p4p4
-rw-r--r--. 1 root root 277 Sep 21 17:09 ifcfg-p4p2
-rw-r--r--. 1 root root 275 Sep 21 17:09 ifcfg-em4
-rw-r--r--. 1 root root 275 Sep 21 17:09 ifcfg-em3
-rw-r--r--. 1 root root 277 Sep 21 17:09 ifcfg-p4p3
-rw-r--r--. 1 root root 275 Sep 21 17:09 ifcfg-em2
-rw-r--r--. 1 root root 411 Oct  7 16:45 ifcfg-bond1
-rw-r--r--. 1 root root 110 Oct  7 16:46 ifcfg-em1
-rw-r--r--. 1 root root 112 Oct  7 16:50 ifcfg-p4p1
The bonding file holds the IP configuration:
[root@SRV network-scripts]# cat ifcfg-bond1
BONDING_OPTS=mode=active-backup
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=192.168.1.180
PREFIX=24
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=bond1
UUID=7b736616-f72d-46b7-b4eb-01468639889b
DEVICE=bond1
ONBOOT=yes
GATEWAY=192.168.1.1
DNS1=192.168.1.5
DOMAIN=domain.com
The p4p1 interface is one of the bond1 slaves:
[root@SRV network-scripts]# cat ifcfg-p4p1
TYPE=Ethernet
NAME=p4p1
UUID=efef0972-4b3f-46a2-b054-ebd1aa201056
DEVICE=p4p1
ONBOOT=yes
MASTER=bond1
SLAVE=yes
The em1 interface, from the other physical network card, is the second bond1 slave:
[root@SRV network-scripts]# cat ifcfg-em1
TYPE=Ethernet
NAME=em1
UUID=8c72c383-e1e9-4e4b-ac2f-3d3d81d5159b
DEVICE=em1
ONBOOT=yes
MASTER=bond1
SLAVE=yes
Check bonding interfaces and mode
[root@SRV network-scripts]# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: em1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: em1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: bc:97:e1:5b:e4:50
Slave queue ID: 0

Slave Interface: p4p1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 3c:fd:fe:85:0d:30
Slave queue ID: 0
[root@SRV network-scripts]#
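The same information is exposed through sysfs by the bonding driver, which is handy for scripting or quick checks (a sketch; the values in the comments are indicative):

cat /sys/class/net/bond1/bonding/mode            # active-backup 1
cat /sys/class/net/bond1/bonding/active_slave    # e.g. em1
cat /sys/class/net/bond1/bonding/slaves          # em1 p4p1
cat /sys/class/net/bond1/bonding/mii_status      # up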
Test the bonding
Both network cables are plugged in (em1 and p4p1) and both interfaces are UP:
[root@SRV network-scripts]# ip addr sh
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1: mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
3: em3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:4e brd ff:ff:ff:ff:ff:ff
4: em2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:51 brd ff:ff:ff:ff:ff:ff
5: em4: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:4f brd ff:ff:ff:ff:ff:ff
6: p4p1: mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
7: p4p2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:31 brd ff:ff:ff:ff:ff:ff
8: p4p3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:32 brd ff:ff:ff:ff:ff:ff
9: p4p4: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:33 brd ff:ff:ff:ff:ff:ff
15: bond1: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.180/24 brd 192.168.1.255 scope global noprefixroute bond1
       valid_lft forever preferred_lft forever
    inet6 fe80::b4f9:e44d:25fc:3a6/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Pinging the server works:
[ansible@linux-ansible / ]$ ping 192.168.1.180
PING 192.168.1.180 (192.168.1.180) 56(84) bytes of data.
64 bytes from 192.168.1.180: icmp_seq=1 ttl=64 time=0.206 ms
64 bytes from 192.168.1.180: icmp_seq=2 ttl=64 time=0.290 ms
64 bytes from 192.168.1.180: icmp_seq=3 ttl=64 time=0.152 ms
64 bytes from 192.168.1.180: icmp_seq=4 ttl=64 time=0.243 ms
I then unplugged the cable from the em1 interface. We can see the em1 interface DOWN and the p4p1 interface UP:
[root@SRV network-scripts]# ip addr sh
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1: mtu 1500 qdisc mq master bond1 state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
3: em3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:4e brd ff:ff:ff:ff:ff:ff
4: em2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:51 brd ff:ff:ff:ff:ff:ff
5: em4: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:4f brd ff:ff:ff:ff:ff:ff
6: p4p1: mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
7: p4p2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:31 brd ff:ff:ff:ff:ff:ff
8: p4p3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:32 brd ff:ff:ff:ff:ff:ff
9: p4p4: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:33 brd ff:ff:ff:ff:ff:ff
15: bond1: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.180/24 brd 192.168.1.255 scope global noprefixroute bond1
       valid_lft forever preferred_lft forever
    inet6 fe80::b4f9:e44d:25fc:3a6/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Pinging the server still works:
[ansible@linux-ansible / ]$ ping 192.168.1.180
PING 192.168.1.180 (192.168.1.180) 56(84) bytes of data.
64 bytes from 192.168.1.180: icmp_seq=1 ttl=64 time=0.234 ms
64 bytes from 192.168.1.180: icmp_seq=2 ttl=64 time=0.256 ms
64 bytes from 192.168.1.180: icmp_seq=3 ttl=64 time=0.257 ms
64 bytes from 192.168.1.180: icmp_seq=4 ttl=64 time=0.245 ms
I then plugged the cable back into the em1 interface and unplugged the cable from the p4p1 interface. We can see the em1 interface UP again and the p4p1 interface DOWN:
[root@SRV network-scripts]# ip addr sh
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1: mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
3: em3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:4e brd ff:ff:ff:ff:ff:ff
4: em2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:51 brd ff:ff:ff:ff:ff:ff
5: em4: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:4f brd ff:ff:ff:ff:ff:ff
6: p4p1: mtu 1500 qdisc mq master bond1 state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
7: p4p2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:31 brd ff:ff:ff:ff:ff:ff
8: p4p3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:32 brd ff:ff:ff:ff:ff:ff
9: p4p4: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:33 brd ff:ff:ff:ff:ff:ff
15: bond1: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.180/24 brd 192.168.1.255 scope global noprefixroute bond1
       valid_lft forever preferred_lft forever
    inet6 fe80::b4f9:e44d:25fc:3a6/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Pinging the server still works:
[ansible@linux-ansible / ]$ ping 192.168.1.180
PING 192.168.1.180 (192.168.1.180) 56(84) bytes of data.
64 bytes from 192.168.1.180: icmp_seq=1 ttl=64 time=0.159 ms
64 bytes from 192.168.1.180: icmp_seq=2 ttl=64 time=0.219 ms
64 bytes from 192.168.1.180: icmp_seq=3 ttl=64 time=0.362 ms
64 bytes from 192.168.1.180: icmp_seq=4 ttl=64 time=0.236 ms
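Pulling the cables is the most realistic test, but the failover can also be simulated from the command line by administratively downing the active slave and checking which interface takes over (a sketch; run it from the console rather than over the bonded interface itself):

cat /sys/class/net/bond1/bonding/active_slave    # e.g. em1
ip link set em1 down
cat /sys/class/net/bond1/bonding/active_slave    # should now show p4p1
ip link set em1 up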
And what about the ODA?
This configuration was set up at a customer site running DELL servers. I have deployed several ODAs at other customers, and the question of fault tolerance across several network cards comes up regularly. Unfortunately, even though the ODA runs the Oracle Linux operating system, such a configuration is not supported on the appliance. The appliance only supports active-backup bonding between ports of the same network card. Additional network cards on the ODA are used to provide additional network connections. Last but not least, LACP is not supported on the appliance.