Network Bonding Configuration


Debian / Ubuntu Linux: Configure Network Bonding [ Teaming / Aggregating NIC ], by nixCraft, September 4, 2011 (last updated September 6, 2011)

NIC teaming (bonding) combines or aggregates multiple network connections in parallel. This is done to increase throughput, and to provide redundancy in case one of the links or Ethernet cards fails. The Linux kernel ships with a bonding driver for aggregating multiple network interfaces into a single logical interface called bond0. In this tutorial, I will explain how to set up bonding on a Debian Linux server to aggregate multiple Ethernet devices into a single link, to get higher data rates and link failover.
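Before configuring anything, you can confirm that your kernel actually ships the bonding driver; a minimal check (the module name bonding is standard on Debian kernels):

  # Show the bonding driver's version and supported module parameters
  modinfo bonding | head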


The instructions were tested using the following setup:

 - 2 x PCI-e gigabit NICs with jumbo frames
 - RAID 6 with 5 enterprise-grade 15k SAS hard disks
 - Debian Linux 6.0.2 amd64

Please note that the following instructions should also work on Ubuntu Linux server.

Required Software

You need to install the following tool:

ifenslave command: It is used to attach and detach slave network devices to a bonding device. A bonding device acts like a normal Ethernet network device to the kernel, but sends out the packets via the slave devices using a simple round-robin scheduler. This allows for simple load balancing, identical to the "channel bonding" or "trunking" techniques used in network switches.
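For illustration, once the bond0 device exists, ifenslave can attach and detach slaves by hand; a minimal sketch (the interface names are the examples used throughout this tutorial):

  # Attach eth0 and eth1 as slaves of bond0
  ifenslave bond0 eth0 eth1
  # Detach eth1 from the bond again
  ifenslave -d bond0 eth1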

Our Sample Setup

Internet

|                  202.54.1.1 (eth0)

ISP Router/Firewall 192.168.1.254 (eth1)

  \
    \                             +------ Server 1 (Debian file server w/ eth0 & eth1) 192.168.1.10
     +------------------+         |
     | Gigabit Ethernet |---------+------ Server 2 (MySQL) 192.168.1.11
     | with Jumbo Frame |         |
     +------------------+         +------ Server 3 (Apache) 192.168.1.12
                                  |
                                  +-----  Server 4 (Proxy/SMTP/DHCP etc) 192.168.1.13
                                  |
                                  +-----  Desktop PCs / Other network devices (etc)

Install ifenslave

Use the apt-get command to install ifenslave, enter:

  apt-get install ifenslave-2.6

Sample outputs:

 Reading package lists... Done
 Building dependency tree
 Reading state information... Done
 Note, selecting 'ifenslave-2.6' instead of 'ifenslave'
 The following NEW packages will be installed:
   ifenslave-2.6
 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
 Need to get 18.4 kB of archives.
 After this operation, 143 kB of additional disk space will be used.
 Get:1 http://mirror.anl.gov/debian/ squeeze/main ifenslave-2.6 amd64 1.1.0-17 [18.4 kB]
 Fetched 18.4 kB in 1s (10.9 kB/s)
 Selecting previously deselected package ifenslave-2.6.
 (Reading database ... 24191 files and directories currently installed.)
 Unpacking ifenslave-2.6 (from .../ifenslave-2.6_1.1.0-17_amd64.deb) ...
 Processing triggers for man-db ...
 Setting up ifenslave-2.6 (1.1.0-17) ...
 update-alternatives: using /sbin/ifenslave-2.6 to provide /sbin/ifenslave (ifenslave) in auto mode.

Linux Bonding Driver Configuration

Create a file called /etc/modprobe.d/bonding.conf, enter:

  vi /etc/modprobe.d/bonding.conf

Append the following:

 alias bond0 bonding
 options bonding mode=0 arp_interval=100 arp_ip_target=192.168.1.254,192.168.1.12

Save and close the file. This configuration file is used by the Linux kernel driver called bonding. The options are important here:

mode=0 : Sets the bonding policy to balance-rr (round robin). This is the default. This mode provides load balancing and fault tolerance.

arp_interval=100 : Sets the ARP link monitoring frequency to 100 milliseconds. Without this option you will get various warnings when starting bond0 via /etc/network/interfaces.

arp_ip_target=192.168.1.254,192.168.1.12 : Uses the 192.168.1.254 (router IP) and 192.168.1.12 IP addresses as ARP monitoring peers when arp_interval is > 0. This is used to determine the health of the link to the targets. Multiple IP addresses must be separated by a comma. At least one IP address must be given (usually I set it to the router IP) for ARP monitoring to function. The maximum number of targets that can be specified is 16.
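If your NICs and driver support MII link sensing, MII monitoring is a common alternative to ARP monitoring; a sketch of an equivalent /etc/modprobe.d/bonding.conf using the miimon, downdelay, and updelay module parameters (the values mirror the bond-* options used later in this tutorial):

 alias bond0 bonding
 # MII monitoring: poll link state every 100 ms, debounce up/down by 200 ms
 options bonding mode=0 miimon=100 downdelay=200 updelay=200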

How Do I Load the Driver?

Type the following command:

  modprobe -v bonding mode=0 arp_interval=100 arp_ip_target=192.168.1.254,192.168.1.12
  tail -f /var/log/messages
  ifconfig bond0
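Once the module is loaded, the bonding driver also exposes its runtime settings through sysfs, which is handy for verifying that the options actually took effect (these paths are standard for the bonding driver):

  # Inspect the active mode and ARP monitoring settings of bond0
  cat /sys/class/net/bond0/bonding/mode
  cat /sys/class/net/bond0/bonding/arp_interval
  cat /sys/class/net/bond0/bonding/arp_ip_target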

Interface Bonding (Teaming) Configuration

First, stop the networking service to bring down eth0 and eth1 (do not type this over an ssh session), enter:

  /etc/init.d/networking stop

You need to modify /etc/network/interfaces file, enter:

  cp /etc/network/interfaces /etc/network/interfaces.bak
  vi /etc/network/interfaces

Remove the eth0 and eth1 static IP configuration and update the file as follows:


#################### WARNING ####################
# You do not need an "iface eth0" nor an "iface eth1" stanza.
# Setup the IP address / netmask / gateway as per your requirements.

auto lo
iface lo inet loopback

# The primary network interface
auto bond0
iface bond0 inet static

   address 192.168.1.10
   netmask 255.255.255.0
   network 192.168.1.0
   gateway 192.168.1.254
   slaves eth0 eth1
   # jumbo frame support
   mtu 9000
   # Load balancing and fault tolerance
   bond-mode balance-rr
   bond-miimon 100
   bond-downdelay 200
   bond-updelay 200
   dns-nameservers 192.168.1.254
   dns-search nixcraft.net.in

Save and close the file, where:

address 192.168.1.10 : Dotted-quad IP address for bond0.
netmask 255.255.255.0 : Dotted-quad netmask for bond0.
network 192.168.1.0 : Dotted-quad network address for bond0.
gateway 192.168.1.254 : Default gateway for bond0.
slaves eth0 eth1 : Set up a bonding device and enslave two real Ethernet devices (eth0 and eth1) to it.
mtu 9000 : Set the MTU size to 9000. See the Linux JumboFrames configuration for more information.
bond-mode balance-rr : Set the bonding mode profile to "load balancing and fault tolerance". See below for more information.
bond-miimon 100 : Set the MII link monitoring frequency to 100 milliseconds. This determines how often the link state of each slave is inspected for link failures.
bond-downdelay 200 : Set the time, to 200 milliseconds, to wait before disabling a slave after a link failure has been detected. This option is only valid for MII link monitoring (bond-miimon).
bond-updelay 200 : Set the time, to 200 milliseconds, to wait before enabling a slave after a link recovery has been detected. This option is only valid for MII link monitoring (bond-miimon).
dns-nameservers 192.168.1.254 : Use 192.168.1.254 as the DNS server.
dns-search nixcraft.net.in : Use nixcraft.net.in as the default host-name lookup domain (optional).
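If the server should instead obtain its address via DHCP, only the addressing lines change; a sketch of the same bond with a dhcp stanza (the bond-* options carry over unchanged):

 auto bond0
 iface bond0 inet dhcp
     slaves eth0 eth1
     bond-mode balance-rr
     bond-miimon 100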

A Note About Various Bonding Policies

In the above example the bonding policy (mode) is set to 0, or balance-rr. Other possible values are as follows:

The Linux bonding driver aggregation policies (mode) are:

balance-rr or 0 : Round-robin policy: transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

active-backup or 1 : Active-backup policy: only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. This mode provides fault tolerance.

balance-xor or 2 : Transmit based on the selected transmit hash policy. The default policy is a simple [(source MAC address XOR'd with destination MAC address) modulo slave count]. This mode provides load balancing and fault tolerance.

broadcast or 3 : Transmit everything on all slave interfaces. This mode provides fault tolerance.

802.3ad or 4 : Create aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification. Most network switches will require some type of configuration to enable 802.3ad mode.

balance-tlb or 5 : Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

balance-alb or 6 : Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation.

[ Source: see Documentation/networking/bonding.txt for more information. ]
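Switching policies only requires changing the bond-mode line in /etc/network/interfaces; for example, an active-backup pair with eth0 preferred would look something like this (bond-primary is supported by the ifenslave scripts, but verify against your version's documentation):

     # Fault tolerance only: eth1 takes over if eth0 fails
     bond-mode active-backup
     bond-primary eth0
     bond-miimon 100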

Start bond0 Interface

Now that all configuration files have been modified, the networking service must be started or restarted, enter:

  /etc/init.d/networking start

OR

  /etc/init.d/networking stop && /etc/init.d/networking start
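Alternatively, on most ifupdown setups you can bounce just the bond interface instead of restarting all networking (again, not over ssh):

  ifdown bond0
  ifup bond0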

Verify New Settings

Type the following command:

  /sbin/ifconfig

Sample outputs:

bond0 Link encap:Ethernet HWaddr 00:xx:yy:zz:tt:31

         inet addr:192.168.1.10  Bcast:192.168.1.255  Mask:255.255.255.0
         inet6 addr: fe80::208:9bff:fec4:3031/64 Scope:Link
         UP BROADCAST RUNNING MASTER MULTICAST  MTU:9000  Metric:1
         RX packets:2414 errors:0 dropped:0 overruns:0 frame:0
         TX packets:1559 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:0
         RX bytes:206515 (201.6 KiB)  TX bytes:480259 (469.0 KiB)

eth0 Link encap:Ethernet HWaddr 00:xx:yy:zz:tt:31

         UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
         RX packets:1214 errors:0 dropped:0 overruns:0 frame:0
         TX packets:782 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:1000
         RX bytes:103318 (100.8 KiB)  TX bytes:251419 (245.5 KiB)
         Memory:fe9e0000-fea00000

eth1 Link encap:Ethernet HWaddr 00:xx:yy:zz:tt:31

         UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
         RX packets:1200 errors:0 dropped:0 overruns:0 frame:0
         TX packets:777 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:1000
         RX bytes:103197 (100.7 KiB)  TX bytes:228840 (223.4 KiB)
         Memory:feae0000-feb00000

lo Link encap:Local Loopback

         inet addr:127.0.0.1  Mask:255.0.0.0
         inet6 addr: ::1/128 Scope:Host
         UP LOOPBACK RUNNING  MTU:16436  Metric:1
         RX packets:8 errors:0 dropped:0 overruns:0 frame:0
         TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:0
         RX bytes:560 (560.0 B)  TX bytes:560 (560.0 B)
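The iproute2 tools report the same MASTER/SLAVE relationships if you prefer them over ifconfig:

  # bond0 carries the IP; the slaves show up with the SLAVE flag
  ip addr show bond0
  ip link show eth0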

How Do I Verify Current Link Status?

Use the cat command to see the current status of the bonding driver and NIC links:

  cat /proc/net/bonding/bond0

Sample outputs:

 Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)

 Bonding Mode: load balancing (round-robin)
 MII Status: up
 MII Polling Interval (ms): 100
 Up Delay (ms): 200
 Down Delay (ms): 200

 Slave Interface: eth0
 MII Status: up
 Link Failure Count: 0
 Permanent HW addr: 00:xx:yy:zz:tt:31

 Slave Interface: eth1
 MII Status: up
 Link Failure Count: 0
 Permanent HW addr: 00:xx:yy:zz:tt:30
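To rehearse the failure scenario shown next in a controlled way, you can administratively take one slave down and watch the driver react (do this only in a maintenance window; ip link is part of iproute2):

  # Simulate a link failure on eth1, observe, then restore it
  ip link set eth1 down
  cat /proc/net/bonding/bond0
  ip link set eth1 up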

Example: Link Failure

The contents of /proc/net/bonding/bond0 after a link failure:

  cat /proc/net/bonding/bond0

Sample outputs:

 Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)

 Bonding Mode: load balancing (round-robin)
 MII Status: up
 MII Polling Interval (ms): 100
 Up Delay (ms): 200
 Down Delay (ms): 200

 Slave Interface: eth0
 MII Status: up
 Link Failure Count: 0
 Permanent HW addr: 00:xx:yy:zz:tt:31

 Slave Interface: eth1
 MII Status: down
 Link Failure Count: 1
 Permanent HW addr: 00:xx:yy:zz:tt:30

You will also see the following information in your /var/log/messages file:

 Sep 5 04:16:15 nas01 kernel: [ 6271.468218] e1000e: eth1 NIC Link is Down
 Sep 5 04:16:15 nas01 kernel: [ 6271.548027] bonding: bond0: link status down for interface eth1, disabling it in 200 ms.
 Sep 5 04:16:15 nas01 kernel: [ 6271.748018] bonding: bond0: link status definitely down for interface eth1, disabling it

However, your nas01 server should keep working without any problem, as the eth0 link is still up and running. Next, replace the faulty network card and connect the cable; you will then see the following message in your /var/log/messages file:

 Sep 5 04:20:21 nas01 kernel: [ 6517.492974] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
 Sep 5 04:20:21 nas01 kernel: [ 6517.548029] bonding: bond0: link status up for interface eth1, enabling it in 200 ms.
 Sep 5 04:20:21 nas01 kernel: [ 6517.748016] bonding: bond0: link status definitely up for interface eth1.
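During cable swaps it can be convenient to keep an eye on the driver state continuously; a simple approach using watch:

  # Refresh the bonding status every second
  watch -n 1 cat /proc/net/bonding/bond0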






