Wednesday, 19 March 2014

Dynamically adding SAN storage in Linux for use with multipath I/O

Dynamically adding storage for use with multipath I/O

This document (3000817) is provided subject to the disclaimer at the
end of this document.
Environment
SUSE Linux Enterprise Server 9

Please note that this document is obsolete for newer SLES versions.
For SLES 10/11, please check the multipathing chapter in the Storage
Administration Guide instead.


SAN suited for multiple paths to storage

The Linux installation has already been configured to use
Device-Mapper Multipath I/O (DM-MPIO) per the article "How to
setup /
use multipathing on SLES".

The Linux driver for the host bus adapters (HBAs) on the system
supports rescanning the HBA to detect the addition or deletion of
storage areas ("SAN disks"). For example, a QLogic or Emulex HBA is
being used.

The system has been booted with kernel parameters for SCSI device
scanning that are appropriate for the SCSI device numbering plan
being followed in the SAN and for the SCSI levels reported by the
HBAs, as discussed in TID 3955167 - Troubleshooting SCSI (LUN)
scanning issues.
Situation
Purpose
Adding SAN storage to the Linux system and setting up multiple I/O
paths to it without needing to reboot the system.
Resolution
Follow these steps to allocate new storage on the SAN, make it
visible to the Linux system, and use it to create new filesystems or
to expand existing filesystems residing on logical volumes:

   1. Allocate the storage on the SAN and update its access control
settings to allow the Linux system access to the new storage. How
this is done in detail is highly dependent on the components of the
SAN and its architecture; refer to the SAN vendor's documentation for
details.

   2. On the Linux system, instruct the HBA driver to rescan the SAN
to discover the new storage area ("LUN"). The exact commands depend
on the kernel version and driver; see also the rescan sketch after
this list.

      With SLES9, the QLogic and Emulex drivers require different
commands for rescanning. For example, for a QLogic 2300 HBA, the
command is
      echo scsi-qlascan > /proc/scsi/qla2xxx/
      whereas for an Emulex HBA, it is
      echo 1 > /sys/class/scsi_host/hostX/issue_lip
      (replace X with the appropriate number of your HBA).

   3. If the HBA driver supports it, check that the new disk is seen
by the HBA driver correctly (at this point, the newly added LUN will
not yet be known to the higher layers of the Linux kernel's SCSI
subsystem and will not yet be usable).

      With SLES9 for a QLogic 2300 HBA, run
      less /proc/scsi/qla2xxx/
      and check the "SCSI LUN information" section. The entries for
the newly added disks are indicated by a "*" after the flags.
      Unfortunately, there is no equivalent for Emulex cards.

   4. Make the new device known to the middle layer of the Linux
kernel's SCSI subsystem. This can be done in two ways (see also the
rescan sketch after this list).
          * Through the
            rescan-scsi-bus.sh
            command, which rescans the SCSI subsystem for changes.
Depending on the SCSI device numbering plan in use, arguments may
need to be used to enlarge the scope of the search or to control it
more carefully, e.g. through
            rescan-scsi-bus.sh -l -w -c
            which will search LUNs other than 0, scan for more device
IDs and for more channels, or through
            rescan-scsi-bus.sh --hosts="hostlist" --channels="channellist" --ids="idlist" --luns="lunlist"
          * Alternatively, one can
            echo "scsi add-single-device 0 1 2 3" > /proc/scsi/scsi
            to add the storage with host ID 0, channel ID 1, target
ID 2, LUN ID 3, or, with kernel 2.6.5-7.257 or newer,
            echo "- - -" > /sys/class/scsi_host/hostX/scan
            (replace X with the appropriate number of your HBA) to
rescan all targets on that host.

   5. Run
      multipath
      to have the new devices picked up for DM-MPIO; a short
verification sketch follows this list.
   6. Examine /dev/disk/by-id to identify the persistent device names
for the new storage area(s) and use the persistent device names for
further operations, including filesystem creation and addition to
/etc/fstab (see the example after this list).
   7. If LVM is to be used for the new storage area(s), configure LVM
at this point, either through YaST or using the LVM command line
tools, starting with pvcreate (see the example after this list).
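
As a hedged illustration of steps 2 and 4, the following sketch
rescans every SCSI host through sysfs. It assumes a kernel of
2.6.5-7.257 or newer (so the sysfs scan interface is available) and
HBAs that honour issue_lip; for older QLogic setups, use the
/proc/scsi/qla2xxx interface shown in step 2 instead.

      # Sketch: rescan all SCSI hosts via sysfs (assumes kernel
      # 2.6.5-7.257 or newer). Skip the LIP step if your HBA lacks it.
      for HOST in /sys/class/scsi_host/host*; do
          # trigger a loop initialization procedure (LIP) where supported
          [ -w "$HOST/issue_lip" ] && echo 1 > "$HOST/issue_lip"
          # rescan all channels, targets and LUNs on this host
          echo "- - -" > "$HOST/scan"
      done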
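
For step 5, a short verification sequence; multipath -l lists the
current multipath maps together with their grouped paths (newer
multipath-tools versions also accept -ll for a more verbose listing).

      multipath                # create/update multipath maps for the new paths
      multipath -l             # list each map and the paths grouped beneath it
      ls -l /dev/disk/by-id/   # the new persistent names should now be visible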
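
A minimal sketch of step 6, assuming ext3 as the filesystem,
/mnt/newsan as the mount point, and <persistent-name> as a
placeholder for the by-id name found in the previous step (all three
are assumptions; substitute your own values).

      # <persistent-name> and /mnt/newsan are placeholders - adjust to your setup.
      mkfs.ext3 /dev/disk/by-id/<persistent-name>
      mkdir -p /mnt/newsan
      mount /dev/disk/by-id/<persistent-name> /mnt/newsan
      # Refer to the same persistent name in /etc/fstab, e.g.:
      # /dev/disk/by-id/<persistent-name>  /mnt/newsan  ext3  defaults  0 2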
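
For step 7, a minimal LVM command-line sketch; the volume group name
vgsan, the logical volume name lvdata and the 10G size are
assumptions chosen for illustration. The same can be done
interactively through YaST.

      # <persistent-name> is a placeholder for the by-id name of the new LUN.
      pvcreate /dev/disk/by-id/<persistent-name>        # initialize as a physical volume
      vgcreate vgsan /dev/disk/by-id/<persistent-name>  # new VG (or vgextend an existing one)
      lvcreate -L 10G -n lvdata vgsan                   # carve out a logical volume
      mkfs.ext3 /dev/vgsan/lvdata                       # put a filesystem on it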

Additional Information
Alternative: Expand an existing storage area

The procedure in this document covers how to add and use new storage
areas. In some situations, it can be preferable to expand an existing
storage area (LUN) and resize the filesystem(s) residing on it. For
details on that procedure, refer to TID 3129645 - How to Expand a
Multipath-Enabled LUN.

Caveat regarding device names

The /dev/sd* device entry for a storage area may change on a reboot
(e.g. when other storage areas are allocated or removed). This is not
problematic for an MPIO setup, but may cause problems when applying
steps from this document to a non-multipath setup. In a non-multipath
setup, the device names in /dev/disk/by-id should be used instead of
the /dev/sd* names, as the /dev/disk/by-id names persist over reboots.
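
To see which (non-persistent) /dev/sd* node a persistent name
currently points to, the by-id symlinks can simply be resolved:

      ls -l /dev/disk/by-id/                          # persistent names as symlinks
      readlink -f /dev/disk/by-id/<persistent-name>   # <persistent-name> is a placeholder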

Adding additional storage to a mainframe LPAR

A similar procedure to add additional storage (DASDs) to an LPAR on
an IBM zSeries mainframe is documented in "z/VM and Linux on IBM
System z: The Virtualization Cookbook - A cookbook for installing and
customizing z/VM 5.2 and Linux SLES 10 on the mainframe".
