Sunday 30 March 2014

How To Configure iSCSI Storage On Red Hat Linux


Display the installed Red Hat release

[root@BobTestBox ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.7 (Tikanga)

Display Architecture
[root@BobTestBox ~]# getconf LONG_BIT
64
Check whether the iSCSI initiator is installed (you need iscsi-initiator-utils-6.2.0.742-0.6.el5 or later)

[root@BobTestBox ~]# rpm -qa | grep -i iscsi
iscsi-initiator-utils-6.2.0.872-10.el5

Display interfaces and IP address details

[root@BobTestBox ~]# ifconfig -a | more
eth0 Link encap:Ethernet HWaddr 00:50:56:AB:00:E7
inet addr:172.28.*.* Bcast:172.28.12.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2503 errors:0 dropped:0 overruns:0 frame:0
TX packets:838 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:196784 (192.1 KiB) TX bytes:98212 (95.9 KiB)

eth1 Link encap:Ethernet HWaddr 00:50:56:AB:00:E8
inet addr:172.28.*.* Bcast:172.28.40.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4237 errors:0 dropped:0 overruns:0 frame:0
TX packets:3807 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:966177 (943.5 KiB) TX bytes:359247 (350.8 KiB)

eth2 Link encap:Ethernet HWaddr 00:50:56:AB:01:00
inet addr:172.28.*.* Bcast:172.28.40.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4387 errors:0 dropped:0 overruns:0 frame:0
TX packets:3960 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:894415 (873.4 KiB) TX bytes:384013 (375.0 KiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:10 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:660 (660.0 b) TX bytes:660 (660.0 b)

Creating the interface files for MPIO
[root@BobTestBox ~]# iscsiadm -m iface -I eth1 -o new
New interface eth1 added
[root@BobTestBox ~]# iscsiadm -m iface -I eth2 -o new
New interface eth2 added

Updating the interface name for each port

[root@BobTestBox ~]# iscsiadm -m iface -I eth1 -o update -n iface.net_ifacename -v eth1
eth1 updated.
[root@BobTestBox ~]# iscsiadm -m iface -I eth2 -o update -n iface.net_ifacename -v eth2
eth2 updated.
[root@BobTestBox ~]# cat /var/lib/iscsi/ifaces/eth1
# BEGIN RECORD 2.0-872
iface.iscsi_ifacename = eth1
iface.net_ifacename = eth1
iface.transport_name = tcp
# END RECORD
[root@BobTestBox ~]# cat /var/lib/iscsi/ifaces/eth2
# BEGIN RECORD 2.0-872
iface.iscsi_ifacename = eth2
iface.net_ifacename = eth2
iface.transport_name = tcp
# END RECORD
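The iface records above are plain key-value files managed by iscsiadm. As a safe illustration only (not a replacement for running iscsiadm), this sketch writes records of the same shape into a scratch directory so the format can be inspected anywhere:

```shell
#!/bin/sh
# Sketch: generate iface records in the same format iscsiadm writes to
# /var/lib/iscsi/ifaces. Writes into a scratch directory so it is safe to
# run; on a real host, use "iscsiadm -m iface" instead.
IFACES_DIR=$(mktemp -d)
for nic in eth1 eth2; do
  cat > "$IFACES_DIR/$nic" <<EOF
# BEGIN RECORD 2.0-872
iface.iscsi_ifacename = $nic
iface.net_ifacename = $nic
iface.transport_name = tcp
# END RECORD
EOF
done
grep "iface.net_ifacename" "$IFACES_DIR"/eth1 "$IFACES_DIR"/eth2
```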

Discovering iSCSI targets with the iscsiadm command

[root@BobTestBox ~]# iscsiadm -m discovery -t st -p 172.28.*.*:3260
172.28.*.*:3260,1 iqn.2001-05.com.equallogic:0-8a0906-a146b2a07-57858e634784f7b0-bobrhel
172.28.*.*:3260,1 iqn.2001-05.com.equallogic:0-8a0906-a146b2a07-57858e634784f7b0-bobrhel

Logging in to the targets (you can also log in to each node individually)

[root@BobTestBox ~]# iscsiadm -m node -l
Logging in to [iface: eth2, target: iqn.2001-05.com.equallogic:0-8a0906-a146b2a07-57858e634784f7b0-bobrhel, portal: 172.28.*.*,3260]
Logging in to [iface: default, target: iqn.2001-05.com.equallogic:0-8a0906-a146b2a07-57858e634784f7b0-bobrhel, portal: 172.28.*.*,3260]
Logging in to [iface: eth1, target: iqn.2001-05.com.equallogic:0-8a0906-a146b2a07-57858e634784f7b0-bobrhel, portal: 172.28.*.*,3260]
Login to [iface: eth2, target: iqn.2001-05.com.equallogic:0-8a0906-a146b2a07-57858e634784f7b0-bobrhel, portal: 172.28.*.*,3260] successful.
Login to [iface: default, target: iqn.2001-05.com.equallogic:0-8a0906-a146b2a07-57858e634784f7b0-bobrhel, portal: 172.28.*.*,3260] successful.
Login to [iface: eth1, target: iqn.2001-05.com.equallogic:0-8a0906-a146b2a07-57858e634784f7b0-bobrhel, portal: 172.28.*.*,3260] successful.
[root@BobTestBox ~]#

Display attached SCSI devices

[root@BobTestBox ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: VMware Model: Virtual disk Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 02
Host: scsi7 Channel: 00 Id: 00 Lun: 00
Vendor: EQLOGIC Model: 100E-00 Rev: 5.2
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi6 Channel: 00 Id: 00 Lun: 00
Vendor: EQLOGIC Model: 100E-00 Rev: 5.2
Type: Direct-Access ANSI SCSI revision: 05
[root@BobTestBox ~]#

The command below displays iSCSI session details

[root@BobTestBox ~]# iscsiadm -m session
tcp: [6] 172.28.*.*:3260,1 iqn.2001-05.com.equallogic:0-8a0906-a146b2a07-57858e634784f7b0-bobrhel
tcp: [7] 172.28.*.*:3260,1 iqn.2001-05.com.equallogic:0-8a0906-a146b2a07-57858e634784f7b0-bobrhel
[root@BobTestBox ~]#

Display disk details

The command below lists the disks known to the kernel.
[root@BobTestBox ~]# fdisk -l
Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 574 4610623+ 83 Linux
/dev/sda2 575 652 626535 82 Linux swap / Solaris
Disk /dev/dm-0: 1085 MB, 1085276160 bytes
255 heads, 63 sectors/track, 131 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/dm-0 doesn’t contain a valid partition table
Disk /dev/sde: 1085 MB, 1085276160 bytes
34 heads, 61 sectors/track, 1022 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
Disk /dev/sde doesn’t contain a valid partition table
Disk /dev/sdf: 1085 MB, 1085276160 bytes
34 heads, 61 sectors/track, 1022 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
Disk /dev/sdf doesn’t contain a valid partition table
[root@BobTestBox ~]#

Checking whether the EqualLogic HIT Kit is installed (I have already installed HIT Kit 1.0.0: # rpm -ivh equallogic-host-tools-1.0.0-1.el5.x86_64.rpm)

[root@BobTestBox ~]# rpm -qa | grep -i equa
equallogic-host-tools-1.0.0-1.el5

HIT requires DKMS 1.95 (Dynamic Kernel Module Support) to be installed (I have already installed it: # rpm -ivh dkms-1.95.32-1.noarch.rpm)
[root@BobTestBox ~]# rpm -qa | grep -i dkms
dkms-1.95.32-1
The DKMS package can be downloaded from http://linux.dell.com/dkms/testing/permalink

Restarting the EHCMD service (the Dell multipath module)
[root@BobTestBox ~]# /etc/init.d/ehcmd restart
Stopping ehcmd: [FAILED]
Starting ehcmd: [ OK ]

Restarting the multipath daemon (you can set aliases, UUIDs, device details, and blacklist entries in multipath.conf before restarting)

[root@BobTestBox ~]# /etc/init.d/multipathd restart
Stopping multipathd daemon: [FAILED]
Starting multipathd daemon: [ OK ]
[root@BobTestBox ~]#

Display multipath details

[root@BobTestBox ~]# multipath -ll
Bob_rhel_lun () dm-0 EQLOGIC,100E-00
[size=1.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][enabled]
\_ 3:0:0:0 sdc 8:32 [active][ready]
\_ 1:0:0:0 sdd 8:48 [active][ready]
[root@BobTestBox ~]#

Device mapper target device status

[root@BobTestBox ~]# dmsetup ls --target=multipath
Bob_rhel_lun (253, 0)
[root@BobTestBox ~]#

Mounting the filesystem. I have already formatted it with the mkfs command (mkfs.ext3 /dev/mapper/Bob_rhel_lun)

[root@BobTestBox ~]# mount /dev/mapper/Bob_rhel_lun /data

Display disk usage
[root@BobTestBox ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 4.3G 2.7G 1.5G 66% /
tmpfs 502M 0 502M 0% /dev/shm
/dev/mapper/Bob_rhel_lun 1019M 34M 934M 4% /data
[root@BobTestBox ~]#

For persistence across reboots, add the mount point to /etc/fstab (for iSCSI-backed filesystems, the _netdev mount option is recommended so the mount waits for the network)

[root@BobTestBox ~]# cat /etc/fstab
LABEL=/ / ext3 defaults 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
LABEL=SWAP-sda2 swap swap defaults 0 0
/dev/mapper/Bob_rhel_lun /data ext3 defaults 0 0
[root@BobTestBox ~]#

Thursday 27 March 2014

Linux Copy One Hard Disk to Another Using dd Command

The dd command can copy a disk in raw format: it copies the partition table, the bootloader, and all partitions on the disk. For example, you can copy /dev/sda to /dev/sdb (of the same or larger size) using the following dd command. dd should complete with no errors and report the number of records read and written.

Login as the root user (use sudo or su command)

Open terminal or shell prompt and type the following dd command:
# dd if=/dev/sda of=/dev/sdb

Where,

if=/dev/sda : Input disk (source)
of=/dev/sdb : Output disk (destination)
You can also copy a hard disk or partition image to another system over the network using the netcat (nc) command.
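The same dd syntax can be exercised safely on file-backed "disks" instead of real devices; this sketch copies a small random image and verifies the copy with cmp, exactly as a raw /dev/sda to /dev/sdb copy would behave:

```shell
#!/bin/sh
# Demonstrate a raw dd copy using temporary file-backed "disks" instead of
# real block devices. The syntax is identical to dd if=/dev/sda of=/dev/sdb.
src=$(mktemp)
dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=1024 count=64 2>/dev/null   # fake 64 KiB source disk
dd if="$src" of="$dst" bs=1024 2>/dev/null                  # raw byte-for-byte copy
cmp -s "$src" "$dst" && echo "copy verified"
```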

Linux change the speed and duplex settings of an Ethernet card


Task: Get speed and other information for eth0

Type following command as root user:
# ethtool eth0
Output:

Settings for eth0:
     Supported ports: [ TP MII ]
     Supported link modes:   10baseT/Half 10baseT/Full
                             100baseT/Half 100baseT/Full
     Supports auto-negotiation: Yes
     Advertised link modes:  10baseT/Half 10baseT/Full
                             100baseT/Half 100baseT/Full
     Advertised auto-negotiation: Yes
     Speed: 100Mb/s
     Duplex: Full
     Port: MII
     PHYAD: 32
     Transceiver: internal
     Auto-negotiation: on
     Supports Wake-on: pumbg
     Wake-on: d
     Current message level: 0x00000007 (7)
     Link detected: yes
Or use mii-tool command as follows:
# mii-tool eth0
Output:

eth0: negotiated 100baseTx-FD flow-control, link ok

Task: Change the speed and duplex settings

Setup eth0 negotiated speed with mii-tool
Disable autonegotiation, and force the MII to either 100baseTx-FD, 100baseTx-HD, 10baseT-FD, or 10baseT-HD:
# mii-tool -F 100baseTx-HD eth0
# mii-tool -F 10baseT-HD eth0
Setup eth0 negotiated speed with ethtool
# ethtool -s eth0 speed 100 duplex full
# ethtool -s eth0 speed 10 duplex half

To make these settings permanent, create a shell script and call it from /etc/rc.local (Red Hat), or on Debian place a script in the /etc/init.d/ directory and run the update-rc.d command to register it.

Linux VLAN configuration

1. Connect the eth0 interface of your linux machine to the switch.
2. Remove the IP Address information on the eth0 interface

    # ifconfig eth0 0.0.0.0
    # ifconfig eth0 up

3. Configure two VLANs on the eth0 interface using vconfig as follows (100 and 200 are the VLAN IDs). If the 8021q module is not loaded, the vconfig command will load it automatically when first invoked.

    # vconfig add eth0 100
    # vconfig add eth0 200  

4. Configure IP on the VLAN interfaces

    # ifconfig eth0.100 xxx.xxx.xxx.xxx netmask 255.255.252.0 up
    # ifconfig eth0.200 yyy.yyy.yyy.yyy netmask 255.255.255.0 up

5. Preserve the vlan configuration across reboots by adding it to configuration files. Create the appropriate ifcfg files for eth0, eth0.100 and eth0.200 in /etc/sysconfig/network-scripts/

    # cd /etc/sysconfig/network-scripts/
     
    Contents of ifcfg-eth0
        DEVICE=eth0
        ONBOOT=no
        TYPE=Ethernet
   
    Contents of  ifcfg-eth0.100
        DEVICE=eth0.100
        IPADDR=xxx.xxx.xxx.xxx
        NETMASK=255.255.252.0
        VLAN=yes
        ONBOOT=yes
        BOOTPROTO=none
   
    Contents of ifcfg-eth0.200
        DEVICE=eth0.200
        IPADDR=yyy.yyy.yyy.yyy
        NETMASK=255.255.255.0
        VLAN=yes
        ONBOOT=yes
        BOOTPROTO=none
       
    Update /etc/sysconfig/network file to make the GATEWAYDEV use the public vlan interface.
   
    Contents of /etc/sysconfig/network
        NETWORKING=yes
        HOSTNAME=un1xf00
        GATEWAY=xxx.xxx.xxx.1
        DOMAINNAME=dev.un1xf00.com
        GATEWAYDEV=eth0.100
       
       
6. The VLAN configuration on the server can be verified in the file /proc/net/vlan/config. Sample contents are shown below.
   
        VLAN Dev name    | VLAN ID
        Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
        eth0.100       | 100  | eth0
        eth0.200       | 200  | eth0
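The ifcfg files in step 5 are simple enough to generate. This sketch writes files of the same shape into a scratch directory (a real system would use /etc/sysconfig/network-scripts); the VLAN IDs and addresses are illustrative, not from a live host:

```shell
#!/bin/sh
# Sketch: generate ifcfg files for VLAN sub-interfaces into a scratch
# directory. Safe to run anywhere; adapt the output path for a real host.
OUT=$(mktemp -d)
PARENT=eth0
# Each entry: "<vlan-id> <ip> <netmask>" (example values)
for spec in "100 192.168.100.10 255.255.252.0" "200 192.168.200.10 255.255.255.0"; do
  set -- $spec
  vid=$1 ip=$2 mask=$3
  cat > "$OUT/ifcfg-$PARENT.$vid" <<EOF
DEVICE=$PARENT.$vid
IPADDR=$ip
NETMASK=$mask
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
EOF
done
ls "$OUT"
```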

Configuring Persistent static route in Linux


Static routes are usually added with the "route add" command. The drawback of "route" is that the routes are forgotten when Linux reboots. To make them persistent across reboots, add them to /etc/sysconfig/network-scripts/route-<interface>.

To add static route using “route add”:


# route add -net 192.168.100.0 netmask 255.255.255.0 gw 192.168.10.1 dev eth0

Adding Persistent static route:

You need to edit /etc/sysconfig/network-scripts/route-eth0 file to define static routes for eth0 interface.

GATEWAY0=192.168.10.1
NETMASK0=255.255.255.0
ADDRESS0=192.168.100.0

GATEWAY1=10.64.34.1
NETMASK1=255.255.255.240
ADDRESS1=10.64.34.10

Save and close the file. Restart networking:

# service network restart

Verify new routing table:

# route -n

# netstat -nr
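The ADDRESSn/NETMASKn/GATEWAYn triplets in route-eth0 follow a strict numbering pattern, so they are easy to build from a list. This sketch generates the file contents (into a temp file, using the two routes from the text above) rather than touching a real system:

```shell
#!/bin/sh
# Sketch: build the ADDRESSn/NETMASKn/GATEWAYn triplets for a
# route-<interface> file from a "network netmask gateway" list.
# Routes are the two examples shown above; output goes to a temp file.
n=0
routefile=$(mktemp)
while read -r net mask gw; do
  printf 'ADDRESS%d=%s\nNETMASK%d=%s\nGATEWAY%d=%s\n\n' \
         "$n" "$net" "$n" "$mask" "$n" "$gw" >> "$routefile"
  n=$((n+1))
done <<EOF
192.168.100.0 255.255.255.0 192.168.10.1
10.64.34.10 255.255.255.240 10.64.34.1
EOF
cat "$routefile"
```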

Wednesday 19 March 2014


How To Set Up iSCSI On Linux (Red Hat)



Definitions:
iSCSI initiator: the endpoint that initiates an iSCSI session by sending SCSI commands over an IP network. It is the client endpoint.
iSCSI target: a storage resource located on an iSCSI server (most of the time a storage array). It is the server endpoint.
LUN (Logical Unit Number): a number used to identify a logical unit, which is a device addressed by the SCSI protocol (over Fibre Channel or iSCSI). It usually represents a slice of a large RAID disk array.
IQN (iSCSI Qualified Name): the iSCSI name of a target or initiator.
On the Storage Server:
Enable and configure the iSCSI Target on your storage server.
Mine is a QNAP Turbo NAS. I’ve got 1 target with 5 LUNs configured.
iSCSI Portal
X Enable iSCSI Target Service
iSCSI Service Port:           3260
 mytarget (iqn.2004-04.com.qnap:ts-859:iscsi.mytarget.c5884b)     Connected
 id:0 - lun1 ( 2024.00 GB)               Enabled
 id:1 - lun2 ( 2024.00 GB)               Enabled
 id:2 - lun3 ( 2024.00 GB)               Enabled
 id:3 - lun4 ( 2024.00 GB)               Enabled
 id:4 - lun5 ( 1804.13 GB)               Enabled
I have two network interfaces:
1) for QNAP management, IP : 10.0.0.5
2)iSCSI access, directly connected to the server : 192.168.0.1
For more security you can enable "LUN masking", which restricts access to the iSCSI target to your client's initiator only (the client's initiator IQN can be found in /etc/iscsi/initiatorname.iscsi).
On the Linux client (see tips ** for VMware configuration) :
Install “iscsi-initiator-utils” on the server that will connect to the iSCSI volume:
# rpm -Uvh iscsi-initiator-utils-6.2.0.865-6.el5.x86_64.rpm
Set up iscsi automatic start on boot and start iscsi services:
# chkconfig iscsid on
# service iscsid start
# chkconfig iscsi on
# service iscsi start
Discover your iSCSI targets:
# iscsiadm -m discovery -t st -p 192.168.0.1
In my case discovery shows the target on both network connections. Log in to the target:
# iscsiadm -m node -T iqn.2004-04.com.qnap:ts-859:iscsi.mytarget.c5884b -p 192.168.0.1 -l
Add automatic login at boot :
# iscsiadm -m node -T iqn.2004-04.com.qnap:ts-859:iscsi.mytarget.c5884b -p 192.168.0.1 --op update -n node.startup -v automatic
As I have another access to the target, I will disable it in order to not disturb the previous configuration:
# iscsiadm -m node -T iqn.2004-04.com.qnap:ts-859:iscsi.mytarget.c5884b -p 10.0.0.5 --logout
# iscsiadm -m node -T iqn.2004-04.com.qnap:ts-859:iscsi.mytarget.c5884b -p 10.0.0.5 --op update -n node.startup -v manual

At this point you will see the iSCSI LUNs as block devices on your client.

On my system the five iSCSI block devices are /dev/sdc, sdd, sde, sdf and sdg.
You will have to create partitions and format them as either standard Linux partitions or LVM physical volumes.
I chose LVM because I need large file systems.
You can use parted or fdisk (if the disk is smaller than 2 TB); see the article "How To Make Partitions Larger Than 2TB With Parted GPT Support".
So here is the result:
# fdisk -l
Disk /dev/sdc: 2173.2 GB, 2173253451776 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      264216  2122314988+  8e  Linux LVM
Disk /dev/sdd: 2173.2 GB, 2173253451776 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      264216  2122314988+  8e  Linux LVM
Disk /dev/sde: 2173.2 GB, 2173253451776 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1      264216  2122314988+  8e  Linux LVM
Disk /dev/sdf: 2173.2 GB, 2173253451776 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1      264216  2122314988+  8e  Linux LVM
Disk /dev/sdg: 1937.1 GB, 1937169711104 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sdg1               1      235514  1891766173+  8e  Linux LVM

Then create your LVM physical volumes, volume group, and logical volumes:
# pvcreate /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
# vgcreate -s 256M vol_vg /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
# lvcreate -l 28672 vol_vg -n vol_lv1
# lvcreate -l 10924 vol_vg -n vol_lv2
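The -l flag to lvcreate counts physical extents, and vgcreate -s 256M set the extent size to 256 MiB, so the resulting volume sizes can be checked with simple arithmetic:

```shell
#!/bin/sh
# Translate the extent counts used above into sizes, given the 256 MiB
# physical extent size set with "vgcreate -s 256M".
PE_MIB=256
for extents in 28672 10924; do
  echo "$extents extents = $(( extents * PE_MIB / 1024 )) GiB"
done
```

28672 extents comes out to 7168 GiB (7 TiB) and 10924 extents to 2731 GiB, consistent with the df output further below.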

File system creation:
# mkfs -t ext3 -b 4096 -N 100000 /dev/vol_vg/vol_lv1 -L VOL1
# mkfs -t ext3 -b 4096 -N 100000 /dev/vol_vg/vol_lv2 -L VOL2

Then mount the file systems:
# mkdir -p /VOL1 /VOL2
# mount -t ext3 /dev/vol_vg/vol_lv1 /VOL1
# mount -t ext3 /dev/vol_vg/vol_lv2 /VOL2
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vol_vg-vol_lv1 7.0T  6.3T  395G  95% /VOL1
/dev/mapper/vol_vg-vol_lv2 2.7T  1.5T  1.2T  57% /VOL2

To mount the iSCSI file systems automatically at startup, use UUIDs (see the article "How To Use UUID And Blkid To Manage Devices").

Get the UUID for each file system:
# blkid /dev/vol_vg/vol_lv1
/dev/vol_vg/vol_lv1: LABEL="VOL1" UUID="4a496f92-6840-4736-a0d5-5b9916113835" SEC_TYPE="ext2" TYPE="ext3"
# blkid /dev/vol_vg/vol_lv2
/dev/vol_vg/vol_lv2: LABEL="VOL2" UUID="cab5e3ec-4797-4227-98e8-e9bca3c3f766" SEC_TYPE="ext2" TYPE="ext3"

Then add the UUIDs to /etc/fstab:
UUID=4a496f92-6840-4736-a0d5-5b9916113835       /VOL1   ext3 _netdev    0 0
UUID=cab5e3ec-4797-4227-98e8-e9bca3c3f766       /VOL2   ext3 _netdev    0 0
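Since blkid prints the LABEL and UUID on one line, the fstab entries can be derived mechanically. This sketch parses a here-doc copy of the blkid output above (mapping each label to a matching /VOLn mount point, an assumption that happens to hold here):

```shell
#!/bin/sh
# Sketch: turn blkid output into _netdev fstab lines. Uses a here-doc copy
# of the blkid output shown above; on a real host, pipe blkid in instead.
awk -F'"' '{ label=$2; uuid=$4;
             printf "UUID=%s\t/%s\text3\t_netdev\t0 0\n", uuid, label }' <<'EOF'
/dev/vol_vg/vol_lv1: LABEL="VOL1" UUID="4a496f92-6840-4736-a0d5-5b9916113835" SEC_TYPE="ext2" TYPE="ext3"
/dev/vol_vg/vol_lv2: LABEL="VOL2" UUID="cab5e3ec-4797-4227-98e8-e9bca3c3f766" SEC_TYPE="ext2" TYPE="ext3"
EOF
```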


How To - Set up 802.1q VLAN Tagging


Setting up 802.1q VLAN Tagging

Implementation Step(s)

1. Ensure that the 8021q module is loaded by entering the following command:
lsmod | grep 8021q
Tip
If the module is not loaded, load it with the following command:

modprobe 8021q

2. Configure your physical interface in /etc/sysconfig/network-scripts/ifcfg-ethX, where X is a unique number corresponding to a specific interface, as follows:

DEVICE=ethX
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes

3. Configure the VLAN interface in /etc/sysconfig/network-scripts. The configuration filename should be the physical interface name plus a "." plus the VLAN ID. For example, if the VLAN ID is 1921 and the physical interface is eth0, the filename should be ifcfg-eth0.1921:

DEVICE=ethX.1921
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.1
NETMASK=255.255.255.0
USERCTL=no
NETWORK=192.168.1.0
VLAN=yes
4. If you need a second VLAN, for example VLAN ID 193, on the same interface eth0, add a new file named ifcfg-eth0.193 with the VLAN configuration details.
5. Restart the networking service for the changes to take effect:

service network restart

Example: create VLAN devices on the bond0 interface for VLAN IDs 1921 and 2688.

# echo "alias bond0 bonding" >> /etc/modprobe.conf
#vi /etc/sysconfig/network-scripts/ifcfg-bond0
# "<NETWORK DESCRIPTION>"
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MTU=1500
BONDING_OPTS="miimon=100 mode=1 use_carrier=1"
:wq

#vi /etc/sysconfig/network-scripts/ifcfg-bond0.1921
DEVICE=bond0.1921
IPADDR=10.117.244.7
NETMASK=255.255.255.224
ONBOOT=yes
BOOTPROTO=static
USERCTL=no
VLAN=yes

:wq

#vi /etc/sysconfig/network-scripts/ifcfg-bond0.2688
DEVICE=bond0.2688
IPADDR=10.117.116.185
NETMASK=255.255.255.224
ONBOOT=yes
BOOTPROTO=static
USERCTL=no
VLAN=yes
:wq

#ifconfig bond0 up
#ifconfig bond0.1921 up
#ifconfig bond0.2688 up

NIC Bonding for RHEL


Description
RHEL allows admins to bind multiple network interfaces together into a single channel using the bonding kernel module and a special network interface called a channel bonding interface. Channel bonding enables two or more network interfaces to act as one, simultaneously increasing bandwidth and providing redundancy. The behavior of the bonded interface depends on the bonding "mode" parameter; mode 0, for example, sets a round-robin policy for fault tolerance and load balancing.
Sample diagram for NIC bonding (image not included).
Step 1: Here we configure the channel bonding interface bond0 with interfaces eth1 and eth2.
            Requirements:
                    2 NIC cards
                    1 Static IP and netmask value

Step 2: Kernel module configuration
To make the channel bonding interface (bond0) valid, the bonding kernel module must be loaded. To ensure that the module is loaded when the interface is brought up, create, as root, a file named bonding.conf in the /etc/modprobe.d/ directory.
            # echo "alias bond0 bonding">/etc/modprobe.d/bonding.conf
            # cat /etc/modprobe.d/bonding.conf
               alias bond0 bonding
            NOTE: The file can be named anything, but it must end with the .conf extension
Step 3: Creating the channel bonding interface
To create the channel bonding interface, create a configuration file in the /etc/sysconfig/network-scripts/ directory called ifcfg-bondN, where N is the interface number, such as "0".

            # cat /etc/sysconfig/network-scripts/ifcfg-bond0
               DEVICE=bond0
               IPADDR=10.32.112.228
               NETMASK=255.255.254.0
               BOOTPROTO=none
               ONBOOT=yes
               USERCTL=no
               BONDING_OPTS="mode=0 miimon=1000"
             For a DHCP device:
             # cat /etc/sysconfig/network-scripts/ifcfg-bond0
                 DEVICE=bond0
                 BOOTPROTO=dhcp
                 ONBOOT=yes
                 USERCTL=no
                 BONDING_OPTS="mode=0 miimon=1000"
NOTE: Here, mode=0 sets a round-robin policy for fault tolerance and load balancing.
miimon specifies (in milliseconds) how often MII link monitoring checks whether the link is up. To verify that the driver for a particular NIC supports MII link reporting, use:
     # ethtool eth0 | grep "Link detected:"
         Link detected: yes
Step 4: Configuring the network interfaces
Once the channel bonding interface (bond0) is created, the network interfaces to be bonded must be configured by adding the MASTER and SLAVE directives to their configuration files. The configuration files for the channel bonded interfaces can be nearly identical.
       # cat /etc/sysconfig/network-scripts/ifcfg-eth1
       # Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
            DEVICE=eth1
       BOOTPROTO=none
       ONBOOT=yes
       HWADDR=00:0c:29:03:23:17
       MASTER=bond0
       SLAVE=yes
       USERCTL=no
       # cat /etc/sysconfig/network-scripts/ifcfg-eth2
        # Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
       DEVICE=eth2
       BOOTPROTO=none
       ONBOOT=yes
       HWADDR=00:0c:29:03:23:21
       MASTER=bond0
       SLAVE=yes
       USERCTL=no
Step 5: Enabling the bonding interface
            # ifconfig bond0 up  ==> enable bond0 and fetch the IP address from the configuration file
                        (or)
            # ifconfig bond0 10.32.112.228 netmask 255.255.254.0 up
           To check existing bonds:
            # cat /sys/class/net/bonding_masters
               bond0
            To view an existing mode:
            # cat /sys/class/net/bond0/bonding/mode
               balance-rr 0
            To check an existing miimon:
            # cat /sys/class/net/bond0/bonding/miimon
               1000
            To verify bonding:
            # ifconfig -a
            bond0     Link encap:Ethernet  HWaddr 00:0C:29:03:23:17
             inet addr:10.32.112.228  Bcast:10.32.113.255  Mask:255.255.254.0
             inet6 addr: fe80::20c:29ff:fe03:2317/64 Scope:Link
             UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
             RX packets:46470 errors:0 dropped:0 overruns:0 frame:0
             TX packets:380 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:0
             RX bytes:5120536 (4.8 MiB)  TX bytes:48723 (47.5 KiB)
          eth0      Link encap:Ethernet  HWaddr 00:0C:29:03:23:0D
             inet addr:10.32.113.13  Bcast:10.32.113.255  Mask:255.255.254.0
             inet6 addr: fe80::20c:29ff:fe03:230d/64 Scope:Link
             UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
             RX packets:26938 errors:0 dropped:0 overruns:0 frame:0
             TX packets:3154 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:1000
             RX bytes:2939471 (2.8 MiB)  TX bytes:435014 (424.8 KiB)
             Interrupt:67 Base address:0x2000
         eth1      Link encap:Ethernet  HWaddr 00:0C:29:03:23:17
             UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
             RX packets:22015 errors:0 dropped:0 overruns:0 frame:0
             TX packets:138 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:1000
             RX bytes:2395814 (2.2 MiB)  TX bytes:20245 (19.7 KiB)
             Interrupt:67 Base address:0x2080
         eth2      Link encap:Ethernet  HWaddr 00:0C:29:03:23:17
             UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
             RX packets:24455 errors:0 dropped:0 overruns:0 frame:0
             TX packets:242 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:1000
             RX bytes:2724722 (2.5 MiB)  TX bytes:28478 (27.8 KiB)
             Interrupt:75 Base address:0x2400
         lo        Link encap:Local Loopback
             inet addr:127.0.0.1  Mask:255.0.0.0
             inet6 addr: ::1/128 Scope:Host
             UP LOOPBACK RUNNING  MTU:16436  Metric:1
             RX packets:3454 errors:0 dropped:0 overruns:0 frame:0
             TX packets:3454 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:0
             RX bytes:4147201 (3.9 MiB)  TX bytes:4147201 (3.9 MiB) 
            # ip addr show
               1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
                  valid_lft forever preferred_lft forever
         2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast                   qlen 1000
            link/ether 00:0c:29:03:23:0d brd ff:ff:ff:ff:ff:ff
            inet 10.32.113.13/23 brd 10.32.113.255 scope global eth0
            inet6 fe80::20c:29ff:fe03:230d/64 scope link
                  valid_lft forever preferred_lft forever
         3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc                        pfifo_fast master bond0 qlen 1000
            link/ether 00:0c:29:03:23:17 brd ff:ff:ff:ff:ff:ff
         4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc                        pfifo_fast master bond0 qlen 1000
            link/ether 00:0c:29:03:23:17 brd ff:ff:ff:ff:ff:ff
         5: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc                      noqueue
            link/ether 00:0c:29:03:23:17 brd ff:ff:ff:ff:ff:ff
            inet 10.32.112.228/23 brd 10.32.113.255 scope global bond0
            inet6 fe80::20c:29ff:fe03:2317/64 scope link tentative
                  valid_lft forever preferred_lft forever
            # cat /proc/net/bonding/bond0
            Ethernet Channel Bonding Driver: v3.4.0-2 (October 7, 2008)
         Bonding Mode: load balancing (round-robin)
         MII Status: up
         MII Polling Interval (ms): 1000
         Up Delay (ms): 0
         Down Delay (ms): 0
         Slave Interface: eth1
         MII Status: up
         Speed: Unknown
         Duplex: Unknown
         Link Failure Count: 0
         Permanent HW addr: 00:0c:29:03:23:17
         Slave Interface: eth2
           MII Status: up
           Speed: Unknown
           Duplex: Unknown
           Link Failure Count: 0
           Permanent HW addr: 00:0c:29:03:23:21
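When checking a bond, the key question is whether every slave reports "MII Status: up". This sketch extracts per-slave status with awk from a here-doc copy of sample output like the above; on a real host you would feed it /proc/net/bonding/bond0 instead:

```shell
#!/bin/sh
# Print each slave interface and its MII status. Reads a here-doc sample;
# on a real host: awk '...' /proc/net/bonding/bond0
awk '/^Slave Interface:/ { slave = $3 }
     /MII Status:/ && slave { print slave, $3; slave = "" }' <<'EOF'
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 1000
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Slave Interface: eth2
MII Status: up
Link Failure Count: 0
EOF
```

The first "MII Status" line (the bond's own) is skipped because no slave has been seen yet; each later one is attributed to the most recent "Slave Interface" line.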
NOTE: To change an existing mode and miimon values (optional)
            # ifconfig bond0 down
            # echo 100 > /sys/class/net/bond0/bonding/miimon
            # echo 6 > /sys/class/net/bond0/bonding/mode
            # ifconfig bond0 up
Bonding modes:
balance-rr or 0: Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded slave interface beginning with the first one available.
active-backup or 1: Sets an active-backup policy for fault tolerance. Transmissions are received and sent out via the first available bonded slave interface. Another bonded slave interface is only used if the active bonded slave interface fails.
balance-xor or 2: Sets an XOR (exclusive-or) policy for fault tolerance and load balancing. Using this method, the interface matches up the incoming request's MAC address with the MAC address of one of the slave NICs. Once the link is established, transmissions are sent out sequentially beginning with the first available interface.
broadcast or 3: Sets a broadcast policy for fault tolerance. All transmissions are sent on all slave interfaces.
802.3ad or 4: Sets an IEEE 802.3ad dynamic link aggregation policy. Creates aggregation groups that share the same speed and duplex settings. Transmits and receives on all slaves in the active aggregator. Requires a switch that is 802.3ad compliant
balance-tlb or 5: Sets a Transmit Load Balancing (TLB) policy for fault tolerance and load balancing. The outgoing traffic is distributed according to the current load on each slave interface. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed slave
balance-alb or 6: Sets an Adaptive Load Balancing (ALB) policy for fault tolerance and load balancing. Includes transmit and receive load balancing for IPv4 traffic. Receive load balancing is achieved through ARP negotiation.
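When scripting against /sys/class/net/bond0/bonding/mode (which, as shown in Step 5, reports both name and number), a small lookup helper like this hypothetical sketch keeps the mode table above at hand:

```shell
#!/bin/sh
# Hypothetical helper: map a numeric bonding mode to its name,
# matching the mode list above.
mode_name() {
  case "$1" in
    0) echo balance-rr ;;
    1) echo active-backup ;;
    2) echo balance-xor ;;
    3) echo broadcast ;;
    4) echo 802.3ad ;;
    5) echo balance-tlb ;;
    6) echo balance-alb ;;
    *) echo unknown ;;
  esac
}
mode_name 0
mode_name 6
```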