Thursday, 7 May 2015
Activating VG when quorum lost in Linux
Normally, volume groups are automatically activated during system startup. Unless you intentionally deactivate a volume group using vgchange, you will probably not need to reactivate a volume group.
However, LVM does require that a "quorum" of disks in a volume group be available. During normal system operation, LVM needs a quorum of more than half of the disks in a volume group for activation. If, during run time, a disk fails and causes quorum to be lost, LVM alerts you with a message to the console, but keeps the volume group active.
If there is no other way to make a quorum available, the -q option to the vgchange command will override the quorum check.
EXAMPLE:
vgchange -a y -q n /dev/vg01
When you replace a disk that was not online when you originally activated the volume group, use the activation command again to attach the now accessible disks to the volume group.
EXAMPLE:
vgchange -a y /dev/vg01
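To confirm that the volume group is active again, a quick check (assuming the vg01 name used above) is:
# vgdisplay /dev/vg01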
Moving Volume group to another Server in Linux
Moving a VG to another server:
To do this we use the vgexport and vgimport commands.
vgexport and vgimport are not necessary to move disk drives from one server to another. They are an administrative policy tool to prevent access to the volumes during the time it takes to move them.
1. Unmount the file system
First, make sure that no users are accessing files on the active volume, then unmount it
# umount /appdata
2. Mark the volume group inactive
Marking the volume group inactive removes it from the kernel and prevents any further activity on it.
# vgchange -an appvg
vgchange -- volume group "appvg" successfully deactivate
3. Export the volume group
You must now export the volume group. This prevents it from being accessed on the old server and prepares it to be removed.
# vgexport appvg
vgexport -- volume group "appvg" successfully exported
Now, when the machine is next shut down, the disk can be unplugged and then connected to its new machine.
4. Import the volume group
When plugged into the new server, the disk may appear as /dev/sdc (the exact name depends on the system), so an initial pvscan shows:
# pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- inactive PV "/dev/sdc1" is in EXPORTED VG "appvg" [996 MB / 996 MB free]
pvscan -- inactive PV "/dev/sdc2" is in EXPORTED VG "appvg" [996 MB / 244 MB free]
pvscan -- total: 2 [1.95 GB] / in use: 2 [1.95 GB] / in no VG: 0 [0]
We can now import the volume group (which also activates it) and mount the file system.
If you are importing on an LVM 2 system, run:
# vgimport appvg
Volume group "vg" successfully imported
5. Activate the volume group
You must activate the volume group before you can access it.
# vgchange -ay appvg
6. Mount the file system
# mkdir -p /appdata
# mount /dev/appvg/appdata /appdata
The file system is now available for use.
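As a final sanity check, assuming the names used above, confirm that the volume group is active and the file system is mounted:
# vgdisplay appvg
# df -h /appdata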
CLEANING UP THE LINUX BUFFER CACHE
When you write data, it doesn’t necessarily get written to disk right then. The kernel maintains caches of many things, and disk data is something where a lot of work is done to keep everything fast and efficient.
That’s great for performance, but sometimes you want to know that data really has gotten to the disk drive. This could be because you want to test the performance of the drive, but could also be when you suspect a drive is malfunctioning: if you just write and read back, you’ll be reading from cache, not from actual disk platters.
Obviously the first thing you need to do is get the data in the cache sent on its way to the disk. That’s “sync”, which tells the kernel that you want the data written. But that doesn’t mean that a subsequent read comes from disk: if the requested data is still in cache, that’s where it will be fetched from. It also doesn’t necessarily mean that the kernel actually has sent the data along to the disk controller: a “sync” command is a request, not a command that says “stop everything else you are doing and write your whole buffer cache to disk right now!”. No, “sync” just means that the cache will be written, as and when the kernel has time to do so.
Note that you may not even need the “sync” if your test overwrites the data anyway: the overwrite forces the sync itself. The harder part is making sure that a subsequent read really comes from the disk and not from cache.
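One way to be sure of that is to bypass the page cache entirely with O_DIRECT reads; a minimal sketch using GNU dd, assuming /dev/sdb is a scratch device you can safely read from:
# dd if=/dev/sdb of=/dev/null bs=1M count=100 iflag=direct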
Modern Linux kernels make this a bit easier: in /proc/sys/vm/ you’ll find “drop_caches”.
You can simply echo a number to that to free caches.
To free pagecache:
echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
echo 3 > /proc/sys/vm/drop_caches
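Since drop_caches only discards clean (already-written) pages, it is common to sync first so that dirty data can be dropped too:
# sync
# echo 3 > /proc/sys/vm/drop_caches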
Linux SAN Multi path for HP EVA Storage
Instead of installing the original device-mapper-multipath package, there is a similar package from HP called HPDMmultipath-[version].tar.gz that already includes a configuration for HP EVA and XP storage devices. The HPDMmultipath-[version].tar.gz package can be downloaded from www.hp.com.
# tar -zxvf HPDMmultipath-3.0.0.tar.gz
# cd HPDMmultipath-3.0.0/RPMS
# rpm -ivh HPDMmultipath-tools[version]-[Linux-Version]-[ARCH].rpm
# vim /etc/multipath.conf
defaults {
udev_dir /dev
polling_interval 10
selector "round-robin 0"
path_grouping_policy failover
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
prio_callout "/bin/true"
path_checker tur
rr_min_io 100
rr_weight uniform
failback immediate
no_path_retry 12
user_friendly_names yes
}
devnode_blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
devnode "^cciss!c[0-9]d[0-9]*"
}
devices {
device {
vendor "HP|COMPAQ"
product "HSV1[01]1 \(C\)COMPAQ|HSV2[01]0"
path_grouping_policy group_by_prio
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
path_checker tur
path_selector "round-robin 0"
prio_callout "/sbin/mpath_prio_alua /dev/%n"
rr_weight uniform
failback immediate
hardware_handler "0"
no_path_retry 12
}
device {
vendor "HP"
product "OPEN-.*"
path_grouping_policy multibus
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
path_selector "round-robin 0"
rr_weight uniform
prio_callout "/bin/true"
path_checker tur
hardware_handler "0"
failback immediate
no_path_retry 12
}
}
Show paths to an EVA8000 storage array.
# multipath -ll
mpath0 (3600508b4001054a20001100001c70000)
[size=4.0G][features="1 queue_if_no_path"][hwhandler="0"]
\_ round-robin 0 [prio=200][active]
\_ 0:0:1:1 sdd 8:48 [active][ready]
\_ 0:0:3:1 sdj 8:144 [active][ready]
\_ 1:0:2:1 sds 65:32 [active][ready]
\_ 1:0:3:1 sdv 65:80 [active][ready]
\_ round-robin 0 [prio=40][enabled]
\_ 0:0:0:1 sda 8:0 [active][ready]
\_ 0:0:2:1 sdg 8:96 [active][ready]
\_ 1:0:0:1 sdm 8:192 [active][ready]
\_ 1:0:1:1 sdp 8:240 [active][ready]
ProLiant Support Pack (PSP)
While installing the PSP from HP, uncheck the HBA failover driver in the installation screen; otherwise a new kernel will be installed and the multipathing driver installed above will no longer work correctly.
Clear Swap Space in Linux
There have been times when it has been necessary for UNIX admins to clear out the swap space on a Linux system. To do this, you must first make sure that you have enough free memory to hold what is being used by swap.
First we want to see what is currently being used.
# free
Then I run the actual commands that empty the swap:
# swapoff -a
# swapon -a
Then I check what is being used after doing this.
# free
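A quick way to compare free memory against used swap before running swapoff; a sketch that assumes the traditional column layout of free (Mem: column 4 is free memory, Swap: column 3 is used swap):
# free | awk '/Mem:/ {mem=$4} /Swap:/ {swap=$3} END {if (mem > swap) print "OK to clear swap"; else print "not enough free memory"}'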
Remove LUNS From Linux Safely
To remove a LUN and all associated PowerPath and Linux devices from the host environment follow these steps.
Note: It is critical to follow the procedures in exact order, because deviating from them can cause the host to panic.
These steps were put together using steps from QLogic KB articles.
1. Stop any I/O to the device being removed and unmount the filesystem. It is critical to stop all I/O on the device that is being removed; you can check for open files before unmounting, as shown below.
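A sketch of that check, using a hypothetical mount point /mnt/lun01:
# lsof /mnt/lun01
# umount /mnt/lun01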
2. Run the following command to determine which native SCSI devices are mapped to the pseudo device:
powermt display dev=all
Find the name of the LUN to be removed and match it up with the emcpower device name. This will need to be done on each server individually.
3. Run the command:
powermt remove dev=emcpowerX
where emcpowerX is the pseudo device name that corresponds to the LUN to be removed.
4. Run the command:
powermt release
Failing to run this command results in the pseudo device still being visible in /dev and /sys/block and may lead to some complications when the new devices are dynamically added.
5. In the /tmp/ql_utils directory, there should be a script to rescan the QLogic HBAs, called ql-dynamic-tgt-lun-disc.sh. Run the script:
/tmp/ql_utils/ql-dynamic-tgt-lun-disc.sh
6. You will now remove the device from the storage array using your array admin utilities. On the Linux server, run powermt display to verify that the device has been removed.
7. Now remove the LUN from the Storage Group (CLARiiON) or unmap it from the FA ports (DMX).
Linux SAN Multipathing using device mapper
There are a lot of SAN multipathing solutions on Linux at the moment. Two of them are discussed in this blog. The first is device mapper multipathing, a failover and load balancing solution with a lot of configuration options. The second (mdadm multipathing) is just a failover solution with manual re-enabling of a failed path. The advantage of mdadm multipathing is that it is very easy to configure.
Before using a multipathing solution in a production environment on Linux, it is also important to determine whether the chosen solution is supported with the hardware in use. For example, HP doesn't support the device mapper multipathing solution on their servers yet.
Device Mapper Multipathing
Procedure for configuring the system with DM-Multipath:
Install device-mapper-multipath rpm
Edit the multipath.conf configuration file:
comment out the default blacklist
change any of the existing defaults as needed
Start the multipath daemons
Create the multipath devices with the multipath command
Install Device Mapper Multipath
# rpm -ivh device-mapper-multipath-0.4.7-8.el5.i386.rpm
warning: device-mapper-multipath-0.4.7-8.el5.i386.rpm: Header V3 DSA signature:
Preparing... ########################################### [100%]
1:device-mapper-multipath########################################### [100%]
Initial Configuration
Set user_friendly_names. The devices will be created as /dev/mapper/mpath[n]. Comment out the default blacklist.
# vim /etc/multipath.conf
#blacklist {
# devnode "*"
#}
defaults {
user_friendly_names yes
path_grouping_policy multibus
}
Load the needed module and enable the startup service.
# modprobe dm-multipath
# /etc/init.d/multipathd start
# chkconfig multipathd on
Print out the multipathed device.
# multipath -v2
or
# multipath -v3
Configuration
Configure device type in config file.
# cat /sys/block/sda/device/vendor
HP
# cat /sys/block/sda/device/model
HSV200
# vim /etc/multipath.conf
devices {
device {
vendor "HP"
product "HSV200"
path_grouping_policy multibus
no_path_retry "5"
}
}
Configure multipath device in config file.
# cat /var/lib/multipath/bindings
# Format:
# alias wwid
#
mpath0 3600508b400070aac0000900000080000
# vim /etc/multipath.conf
multipaths {
multipath {
wwid 3600508b400070aac0000900000080000
alias mpath0
path_grouping_policy multibus
path_checker readsector0
path_selector "round-robin 0"
failback "5"
rr_weight priorities
no_path_retry "5"
}
}
Blacklist devices that should not be multipathed (e.g. local RAID devices and volume groups).
# vim /etc/multipath.conf
devnode_blacklist {
devnode "^cciss!c[0-9]d[0-9]*"
devnode "^vg*"
}
Show Configured Multipaths.
# dmsetup ls --target=multipath
mpath0 (253, 1)
# multipath -ll
mpath0 (3600508b400070aac0000900000080000) dm-1 HP,HSV200
[size=10G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=4][active]
\_ 0:0:0:1 sda 8:0 [active][ready]
\_ 0:0:1:1 sdb 8:16 [active][ready]
\_ 1:0:0:1 sdc 8:32 [active][ready]
\_ 1:0:1:1 sdd 8:48 [active][ready]
Format and mount Device
fdisk cannot be used on /dev/mapper/[dev_name] devices. Instead, use fdisk on the underlying disks, then execute the following command so that device-mapper multipath creates a /dev/mapper/mpath[n]p[m] device for each partition.
# fdisk /dev/sda
# kpartx -a /dev/mapper/mpath0
# ls /dev/mapper/*
mpath0 mpath0p1
# mkfs.ext3 /dev/mapper/mpath0p1
# mount /dev/mapper/mpath0p1 /mnt/san
After that /dev/mapper/mpath0p1 is the first partition on the multipathed device.
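To make the mount persistent across reboots, an /etc/fstab entry along these lines can be added (a sketch using the names above):
# vim /etc/fstab
/dev/mapper/mpath0p1 /mnt/san ext3 defaults 0 0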
Multipathing with mdadm on Linux
The md multipathing solution is only a failover solution, which means that only one path is used at a time and no load balancing is performed.
Start the MD Multipathing Service
# chkconfig mdmpd on
# /etc/init.d/mdmpd start
On the first Node (if it is a shared device)
Make Label on Disk
# fdisk /dev/sdt
Disk /dev/sdt: 42.9 GB, 42949672960 bytes
64 heads, 32 sectors/track, 40960 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdt1 1 40960 41943024 fd Linux raid autodetect
# partprobe
Bind multiple paths together
# mdadm --create /dev/md4 --level=multipath --raid-devices=4 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1
Get UUID
# mdadm --detail /dev/md4
UUID : b13031b5:64c5868f:1e68b273:cb36724e
Set md configuration in config file
# vim /etc/mdadm.conf
# Multiple Paths to RAC SAN
DEVICE /dev/sd[qrst]1
ARRAY /dev/md4 UUID=b13031b5:64c5868f:1e68b273:cb36724e
# cat /proc/mdstat
On the second Node (Copy the /etc/mdadm.conf from the first node)
# mdadm -As
# cat /proc/mdstat
Restore a failed path
# mdadm /dev/md4 -f /dev/sdt1 -r /dev/sdt1 -a /dev/sdt1
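The three flags run in sequence: -f marks the path faulty, -r removes it from the array, and -a hot-adds it back. Check the result in /proc/mdstat afterwards:
# cat /proc/mdstat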
9 Linux ethtool Examples to Manipulate Ethernet Card (NIC Card)
1. List Ethernet Device Properties
When you execute ethtool command with a device name, it displays the following information about the ethernet device.
# ethtool eth0
Settings for eth0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 100Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Link detected: yes
The above ethtool output displays ethernet card properties such as speed, wake-on, duplex and the link detection status. The following duplex modes are available:
Full duplex : Enables sending and receiving of packets at the same time. This mode is used when the ethernet device is connected to a switch.
Half duplex : Enables either sending or receiving of packets at a single point of time. This mode is typically used when the ethernet device is connected to a hub.
Auto-negotiation : Lets the device negotiate speed and duplex mode with its link partner.
2. Change NIC Parameter Using ethtool Option -s autoneg
The above ethtool eth0 output shows that the “Auto-negotiation” parameter is enabled. You can disable it using the autoneg option of ethtool as shown below.
# ifdown eth0
eth0 device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
eth0 configuration: eth-bus-pci-0000:0b:00.0
# ethtool -s eth0 autoneg off
# ethtool eth0
Settings for eth0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: Not reported
Advertised auto-negotiation: No
Speed: Unknown! (65535)
Duplex: Unknown! (255)
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: off
Supports Wake-on: g
Wake-on: g
Link detected: no
# ifup eth0
After the above change, you can see that the “Link detected” value changed to “no” and auto-negotiation is in the off state.
3. Change the Speed of Ethernet Device
Using ethtool you can change the speed of the ethernet device to work with certain network devices; the newly assigned speed value must be within the device's supported capabilities.
# ethtool -s eth0 speed 100 autoneg off
# ethtool eth0
Settings for eth0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: Not reported
Advertised auto-negotiation: No
Speed: Unknown! (65535)
Duplex: Unknown! (255)
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: off
Supports Wake-on: g
Wake-on: g
Link detected: no
If you change the speed while the adapter is online, it automatically goes offline, and you need to bring it back online using the ifup command.
# ifup eth0
eth0 device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
eth0 configuration: eth-bus-pci-0000:0b:00.0
Checking for network time protocol daemon (NTPD): running
# ethtool eth0
Settings for eth0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: Not reported
Advertised auto-negotiation: No
Speed: 100Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: off
Supports Wake-on: g
Wake-on: g
Link detected: yes
As shown in the above output, the speed is now fixed at 100Mb/s and the auto-negotiation parameter is unset.
To change the Maximum Transmission Unit (MTU), refer to our ifconfig examples article.
4. Display Ethernet Driver Settings
ethtool -i option displays driver version, firmware version and bus details as shown below.
# ethtool -i eth0
driver: bnx2
version: 2.0.1-suse
firmware-version: 1.9.3
bus-info: 0000:04:00.0
5. Display Auto-negotiation, RX and TX of eth0
View the autonegotiation details about the specific ethernet device as shown below.
# ethtool -a eth0
Pause parameters for eth0:
Autonegotiate: on
RX: on
TX: on
6. Display Network Statistics of Specific Ethernet Device
Use the ethtool -S option to display the bytes transferred, received, errors, etc., as shown below.
# ethtool -S eth0
NIC statistics:
rx_bytes: 74356477841
rx_error_bytes: 0
tx_bytes: 110725861146
tx_error_bytes: 0
rx_ucast_packets: 104169941
rx_mcast_packets: 138831
rx_bcast_packets: 59543904
tx_ucast_packets: 118118510
tx_mcast_packets: 10137453
tx_bcast_packets: 2221841
tx_mac_errors: 0
tx_carrier_errors: 0
rx_crc_errors: 0
rx_align_errors: 0
tx_single_collisions: 0
tx_multi_collisions: 0
tx_deferred: 0
tx_excess_collisions: 0
tx_late_collisions: 0
tx_total_collisions: 0
rx_fragments: 0
rx_jabbers: 0
rx_undersize_packets: 0
rx_oversize_packets: 0
rx_64_byte_packets: 61154057
rx_65_to_127_byte_packets: 55038726
rx_128_to_255_byte_packets: 426962
rx_256_to_511_byte_packets: 3573763
rx_512_to_1023_byte_packets: 893173
rx_1024_to_1522_byte_packets: 42765995
rx_1523_to_9022_byte_packets: 0
tx_64_byte_packets: 3633165
tx_65_to_127_byte_packets: 51169838
tx_128_to_255_byte_packets: 3812067
tx_256_to_511_byte_packets: 113766
tx_512_to_1023_byte_packets: 104081
tx_1024_to_1522_byte_packets: 71644887
tx_1523_to_9022_byte_packets: 0
rx_xon_frames: 0
rx_xoff_frames: 0
tx_xon_frames: 0
tx_xoff_frames: 0
rx_mac_ctrl_frames: 0
rx_filtered_packets: 14596600
rx_discards: 0
rx_fw_discards: 0
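To pick out just the error counters from this long list, combine the output with grep:
# ethtool -S eth0 | grep -i error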
7. Troubleshoot the Ethernet Connection Issues
When there is a problem with the network connection, you might want to check (or change) the ethernet device parameters explained in the above examples if you see the following issues in the output of the ethtool command:
Speed and Duplex value is shown as Unknown
Link detection value is shown as No
Upon successful connection, the parameters mentioned above get appropriate values: Speed is assigned a known value, Duplex becomes either Full or Half, and Link detected becomes yes.
After the above changes, if Link detected still says “no”, check whether there is any issue with the cabling between the switch and the system; you might want to dig into that aspect further.
To capture and analyze packets from a specific network interface, use tcpdump utility.
8. Identify Specific Device From Multiple Devices (Blink LED Port of NIC Card)
# ethtool -p eth0
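This blinks the LED on the eth0 port so you can identify it among multiple NICs. On many drivers you can also pass a duration in seconds so the blinking stops by itself:
# ethtool -p eth0 10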
9. Make Changes Permanent After Reboot
If you've changed any ethernet device parameters using ethtool, they will all disappear after the next reboot unless you do the following.
On Ubuntu, you have to modify the /etc/network/interfaces file and add all your changes as shown below.
# vim /etc/network/interfaces
post-up ethtool -s eth2 speed 1000 duplex full autoneg off
Add this line at the end of the stanza for the interface it applies to. It will change the speed, duplex and autoneg of the eth2 device permanently.
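For context, a hypothetical complete stanza for a statically configured eth2 might look like the below (the address and netmask are placeholders):
auto eth2
iface eth2 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    post-up ethtool -s eth2 speed 1000 duplex full autoneg off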
On SUSE, modify the /etc/sysconfig/network/ifcfg-eth-id-* file and hook in a new script using the POST_UP_SCRIPT variable. Create the script for the corresponding eth1 adapter as shown below.
# cd /etc/sysconfig/network/
# vim scripts/eth1
#!/bin/bash
/sbin/ethtool -s eth1 duplex full speed 100 autoneg off
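The script must be executable, and the adapter's ifcfg file has to reference it via the POST_UP_SCRIPT variable mentioned above (the ifcfg file name below is a placeholder for your adapter's actual config file):
# chmod +x scripts/eth1
# vim ifcfg-eth-id-<mac-address>
POST_UP_SCRIPT='scripts/eth1'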
How to scan new FC LUNs and SCSI disks in Linux?
How do you scan new FC LUNs and SCSI disks in Redhat Linux without rebooting the server? Many Linux beginners wonder how to do this, and this article is for them. It may look very simple, since we perform these scans in daily operations, but the system has a lot of work to do in the background when you execute storage scanning commands. Redhat notes that this type of scan can be disruptive, since it can cause delays while I/O operations time out and can remove devices unexpectedly from the OS. So perform this scan only when you really need to discover new disks and LUNs.
Scanning FC LUNs in Redhat Linux
1. First, find out how many disks are visible in “fdisk -l”.
# fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l
2. Find out how many host bus adapters are configured on the Linux box. You can use “systool -c fc_host -v” to verify the available FC HBAs on the system.
# ls /sys/class/fc_host
host0 host1
In this case, you need to scan the host0 & host1 HBAs.
3. If the system's free memory is too low, do not proceed further. If you have enough free memory, proceed with the below commands to scan for new LUNs.
# echo "1" > /sys/class/fc_host/host0/issue_lip
# echo "1" > /sys/class/fc_host/host1/issue_lip
Note: You need to monitor /var/log/messages to determine when the LIP scan completes; issuing a LIP is an asynchronous operation.
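For example, keep a terminal open on the log while the LIP runs:
# tail -f /var/log/messages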
4. Verify whether the new LUN is visible by counting the available disks.
# fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l
If any new LUNs were added, the count will be higher than it was before scanning.
Scanning SCSI DISKS in Redhat Linux
1. Find the existing disks using fdisk.
[root@mylinz1 ~]# fdisk -l |egrep '^Disk' |egrep -v 'dm-'
Disk /dev/sda: 21.5 GB, 21474836480 bytes
2. Find out how many SCSI controllers are configured.
[root@mylinz1 ~]# ls /sys/class/scsi_host/
host0 host1 host2
In this case, you need to scan host0, host1 & host2.
3. Scan the SCSI disks using the below commands.
[root@mylinz1 ~]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@mylinz1 ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@mylinz1 ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
4. Verify whether the new disks are visible.
[root@mylinz1 ~]# fdisk -l |egrep '^Disk' |egrep -v 'dm-'
Disk /dev/sda: 21.5 GB, 21474836480 bytes
Disk /dev/sdb: 1073 MB, 1073741824 bytes
Disk /dev/sdc: 1073 MB, 1073741824 bytes
From Redhat Linux 5.4 onwards, Redhat introduced the “/usr/bin/rescan-scsi-bus.sh” script to scan all the SCSI buses and update the SCSI layer to reflect new devices. But sometimes the script is not able to detect new disks, and you need to fall back to the echo commands shown above.
How to Detect a new hard disk attached without rebooting in Linux
Scenario:
Suppose you have added a new hard disk to your Linux OS running in a virtual environment, which in my case is VMware Workstation. Once the new hard drive is added, the changes generally won't be reflected until you reboot the guest OS.
But what if you are in no position to reboot the guest Linux OS?
Solution:
In the below path you can find a list of symlinks pointing to the SCSI host adapters configured on your Linux box:
# ls -l /sys/class/scsi_host/
total 0
lrwxrwxrwx 1 root root 0 Jun 14 05:08 host0 -> ../../devices/pci0000:00/0000:00:07.1/host0/scsi_host/host0
lrwxrwxrwx 1 root root 0 Jun 14 05:08 host1 -> ../../devices/pci0000:00/0000:00:07.1/host1/scsi_host/host1
lrwxrwxrwx 1 root root 0 Jun 14 05:08 host10 -> ../../devices/pci0000:00/0000:00:11.0/0000:02:05.0/host10/scsi_host/host10
lrwxrwxrwx 1 root root 0 Jun 14 05:08 host2 -> ../../devices/pci0000:00/0000:00:10.0/host2/scsi_host/host2
lrwxrwxrwx 1 root root 0 Jun 14 05:08 host3 -> ../../devices/pci0000:00/0000:00:11.0/0000:02:05.0/host3/scsi_host/host3
lrwxrwxrwx 1 root root 0 Jun 14 05:08 host4 -> ../../devices/pci0000:00/0000:00:11.0/0000:02:05.0/host4/scsi_host/host4
lrwxrwxrwx 1 root root 0 Jun 14 05:08 host5 -> ../../devices/pci0000:00/0000:00:11.0/0000:02:05.0/host5/scsi_host/host5
lrwxrwxrwx 1 root root 0 Jun 14 05:08 host6 -> ../../devices/pci0000:00/0000:00:11.0/0000:02:05.0/host6/scsi_host/host6
lrwxrwxrwx 1 root root 0 Jun 14 05:08 host7 -> ../../devices/pci0000:00/0000:00:11.0/0000:02:05.0/host7/scsi_host/host7
lrwxrwxrwx 1 root root 0 Jun 14 05:08 host8 -> ../../devices/pci0000:00/0000:00:11.0/0000:02:05.0/host8/scsi_host/host8
lrwxrwxrwx 1 root root 0 Jun 14 05:08 host9 -> ../../devices/pci0000:00/0000:00:11.0/0000:02:05.0/host9/scsi_host/host9
But to detect a newly attached hard drive, you first need to find the host bus number in use, which you can get with the below command.
# grep mpt /sys/class/scsi_host/host?/proc_name
You should get output like the below:
/sys/class/scsi_host/host2/proc_name:mptspi
So, as you can see, host2 is the relevant host on which you need to trigger a rescan. Run the below command.
# echo "- - -" > /sys/class/scsi_host/host2/scan
Here "- - -" defines the three values stored inside host*/scan i.e. channel number, SCSI target ID, and LUN values. We are simply replacing the values with wild cards so that it can detect new changes attached to the Linux box. This procedure will add LUNs, but not remove them.
Once done, verify that you can see the new hard drive, which in my case worked fine, as shown below.
# fdisk -l
Disk /dev/sdb: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000