
How do I rescan the SCSI bus to add or remove a SCSI device without rebooting the computer?

 Updated 19 Nov 2012, 7:29 PM GMT


Issue

·         Is it possible to add or remove a SCSI device without rebooting a running system?

·         Can you scan a SCSI bus for new or missing SCSI devices without rebooting?

·         What is the Linux equivalent to the Solaris command `devfsadm` to add or remove storage devices?

·         How can I make newly connected SCSI devices available without rebooting?

·         I am trying to add a LUN to a live system but it is not recognized

·         How can I force a rescan of my SAN?

·         What should I do if a newly allocated LUN on my SAN is not available?

Environment

·         Red Hat Enterprise Linux 5.0 or above   

o    SCSI devices over a Fibre Channel or iSCSI transport

Technical support for online storage reconfiguration is provided on Red Hat Enterprise Linux 5 and above. Limited tools for hot adding and removing storage are present in previous releases of Red Hat Enterprise Linux; however, they cannot be guaranteed to work correctly in all configurations. Red Hat Enterprise Linux 5 includes many enhancements to udev, the low-level device drivers, the SCSI midlayer, and device-mapper multipath, which together enable comprehensive support for online storage reconfiguration.

This article, the Online Storage Reconfiguration Guide, and the Storage Administration Guide currently cover the FC and iSCSI transports. Future versions of this documentation will cover other SCSI transports, such as SAS and FCoE.

Hewlett-Packard SmartArray controllers and other hardware that uses the cciss driver provide a different interface for manipulating SCSI devices.  Users of this hardware can find a similar guide here.

The procedures below also apply to hypervisors (i.e. "dom0" in Red Hat Enterprise Linux 5 virtualization), but the procedures are different for dynamically altering the storage of running virtual guests. For more information about adding storage to virtual guests, see the Virtualization Guide.

Resolution

Yes, as of Red Hat Enterprise Linux 5.0 it is possible to make changes to the SCSI I/O subsystem without rebooting. A number of methods can be used to accomplish this: some perform changes explicitly, one device or one bus at a time, while others are potentially more disruptive, causing bus resets or a large number of configuration changes at the same time. If the less-disruptive methods are used, it is not necessary to pause I/O while the change is being made. If one of the more disruptive methods is used then, as a precaution, I/O must be paused on each of the SCSI buses involved in the change.

This article is a brief summary of the information contained in the Red Hat Enterprise Linux manuals. For Red Hat Enterprise Linux 5, refer to the Online Storage Reconfiguration Guide; for Red Hat Enterprise Linux 6, refer to the Storage Administration Guide. Refer to these documents for complete coverage of this topic.

Removing a Storage Device

Before removing access to the storage device itself, you may want to copy data from the device. When that is done, stop and flush all I/O, and remove all operating system references to the device, as described below. If this is a multipath device, do this for the multipath pseudo-device and for each of the identifiers that represent a path to the device.

Removal of a storage device is not recommended when the system is under memory pressure, since the I/O flush will add to the load. To determine the level of memory pressure run the command:

vmstat 1 100

Device removal is not recommended if swapping is active (non-zero "si" and "so" columns in the vmstat output), and free memory is less than 5% of the total memory in more than 10 samples per 100.  (The total memory can be obtained with the "free" command.)
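As a rough sketch, that rule of thumb can be checked with a small script. The column positions assume the default vmstat output format (free is column 4, si and so are columns 7 and 8), and the 5%/10-sample thresholds are the ones given above:

```shell
# check_pressure reads vmstat output on stdin and prints "pressure" when more
# than 10 samples show swapping activity (non-zero si/so) while free memory
# is below 5% of the total passed as $1 (in kB); otherwise it prints "ok".
check_pressure() {
    awk -v total="$1" '
        NR > 2 {                  # skip the two vmstat header lines
            if (($7 > 0 || $8 > 0) && $4 < total * 0.05) pressured++
        }
        END { if (pressured > 10) print "pressure"; else print "ok" }'
}

# Typical use (100 one-second samples, total memory from /proc/meminfo):
#   vmstat 1 100 | check_pressure "$(awk '/MemTotal/ {print $2}' /proc/meminfo)"
```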

The general procedure for removing all access to a device is as follows:

      1. Close all users of the device. Copy data from the device, as needed.

      2. Use umount to unmount any file systems that are mounted on the device.

      3. Remove the device from any md and LVM volume that is using it. If the device is a member of an LVM Volume group, then it may be necessary to move data off the device using the pvmove command, then use the vgreduce command to remove the physical volume, and (optionally) pvremove to remove the LVM metadata from the disk.
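The LVM part of this step can be sketched as a dry run. The volume group datavg and device /dev/sdc are placeholder names, and RUN=echo only prints the commands; clear RUN to actually execute them (as root, on a real system):

```shell
# Dry-run sketch of moving data off a physical volume and removing it from
# its volume group. Replace datavg and /dev/sdc with your own names; remove
# RUN=echo to run the commands for real.
RUN=echo

remove_pv() {
    vg=$1 dev=$2
    $RUN pvmove "$dev"           # migrate extents off the outgoing PV
    $RUN vgreduce "$vg" "$dev"   # drop the PV from the volume group
    $RUN pvremove "$dev"         # (optional) wipe LVM metadata from the disk
}

remove_pv datavg /dev/sdc
```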

      4. If you are removing a multipath device, run multipath -l and take note of all the paths to the device. When this has been done, remove the multipath device:

multipath -f multipath-device

Where multipath-device is the name of the multipath device mpath0, for example.

NOTE: This command may fail with "map in use" if the multipath device is still in use (for example, a partition is on the device).  See https://access.redhat.com/kb/docs/DOC-56916 for further details.

      5. Use the following command to flush any outstanding I/O to all paths to the device:

blockdev --flushbufs device

      This is particularly important for raw devices, where there is no umount or vgreduce operation to cause an I/O flush.

      6. Remove any reference to the device's path-based name, such as the /dev/sd name, the /dev/disk/by-path name, or the major:minor number, in applications, scripts, or utilities on the system.  This is important to ensure that a different device, added in the future, will not be mistaken for the current device.

      7. The final step is to remove each path to the device from the SCSI subsystem.  The command to remove a path is:

echo 1 >  /sys/block/device-name/device/delete

Where device-name may be sde, for example.

Another variation of this operation is:

echo 1 >  /sys/class/scsi_device/h:c:t:l/device/delete

Where h is the HBA number, c is the channel on the HBA, t is the SCSI target ID, and l is the LUN.

You can determine the device-name and the h,c,t,l for a device from various commands, such as lsscsi, scsi_id, multipath -l, and ls -l /dev/disk/by-*.

If each of the steps above is followed, a device can safely be removed from a running system. It is not necessary to stop I/O to other devices while this is done.
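Steps 4, 5, and 7 can be strung together into a small dry-run sketch. The multipath name mpath0 and path names sdc/sdd are hypothetical (they would come from multipath -l), and RUN=echo only prints each command; clear RUN to execute for real as root:

```shell
# Dry-run sketch of removing a multipath device and its SCSI paths.
RUN=echo

remove_device() {                    # usage: remove_device mpathX "sdX sdY"
    mpath=$1 paths=$2
    $RUN multipath -f "$mpath"       # step 4: remove the multipath map
    for p in $paths; do
        $RUN blockdev --flushbufs "/dev/$p"                # step 5: flush I/O
        $RUN sh -c "echo 1 > /sys/block/$p/device/delete"  # step 7: drop path
    done
}

remove_device mpath0 "sdc sdd"       # placeholder names from 'multipath -l'
```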

Other procedures, such as the physical removal of the device, followed by a rescan of the SCSI bus using rescan-scsi-bus or issue_lip to cause the operating system state to be updated to reflect the change, are not recommended. This may cause delays due to I/O timeouts, and devices may be removed/replaced unexpectedly. If it is necessary to perform a rescan of an interconnect, it must be done while I/O is paused. Refer to Online Storage Reconfiguration Guide and  Storage Administration Guide for more information.

Adding a Storage Device or a Path

When adding a device, be aware that the path-based device name (the “sd” name, the major:minor number, and /dev/disk/by-path name, for example) that the system assigns to the new device may have been previously in use by a device that has since been removed. Ensure that all old references to the path-based device name have been removed. Otherwise the new device may be mistaken for the old device.

The first step is to physically enable access to the new storage device, or to a new path to an existing device.  This may involve installing cables and disks, and running vendor-specific commands at the FC or iSCSI storage server. When you do this, take note of the LUN value for the new storage that will be presented to your host.

Next, make the operating system aware of the new storage device, or path to an existing device. The preferred command is:

echo "c t l" >  /sys/class/scsi_host/hostH/scan

where H is the HBA number, c is the channel on the HBA, t is the SCSI target ID, and l is the LUN.

You can determine the H,c,t by referring to another device that is already configured on the same path as the new device. This can be done with commands such as lsscsi, scsi_id, multipath -l, and ls -l /dev/disk/by-*. This information, plus the LUN number of the new device, can be used as shown above to probe and configure that path to the new device.
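A small sketch of assembling the scan string: the H, c, t values and LUN number below are hypothetical placeholders, and the final write (shown only in a comment) must be done as root on the live system:

```shell
# Build the "c t l" scan string for a new LUN on an already-known path.
# H, C and T would normally be copied from 'lsscsi' output for an existing
# device on the same HBA/target; NEW_LUN comes from the storage server.
H=1 C=0 T=3 NEW_LUN=12
scan_file=/sys/class/scsi_host/host$H/scan

echo "Would run: echo \"$C $T $NEW_LUN\" > $scan_file"
# To actually probe the LUN (as root):
#   echo "$C $T $NEW_LUN" > "$scan_file"
```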

Note: In some Fibre Channel hardware configurations, when a new LUN is created on the RAID array it may not be visible to the operating system until after a LIP (Loop Initialization Protocol) operation is performed. Refer to the manuals for instructions on how to do this. If a LIP is required, it will be necessary to stop I/O while this operation is done.

As of Red Hat Enterprise Linux 5.6, it is also possible to use the wildcard character "-" in place of c, t and/or l in the command shown above. In this case, it is not necessary to stop I/O  while this command executes. In versions prior to 5.6, the use of wildcards  in this command requires that I/O be paused as a precaution.

After adding all the SCSI paths to the device, execute the multipath command, and check to see that the device has been properly configured. At this point, the device is available to be added to md, LVM, mkfs, or mount, for example.

Other commands that cause a SCSI bus reset, a LIP, or a system-wide rescan, and that may result in multiple add/remove/replace operations, are not recommended. If these commands are used, then I/O to the affected SCSI buses must be paused and flushed prior to the operation. Refer to the Online Storage Reconfiguration Guide and Storage Administration Guide for more information.

As of release 5.4, a script called /usr/bin/rescan-scsi-bus.sh is available as part of the sg3_utils package. This can make rescan operations easier. This script is described in the manuals mentioned above.

 


last modified by Nitin Yewale on 01/10/12 - 11:54

Issue

How do I create persistent device names for attached SCSI devices that will not be changed on reboot or when new devices are added or existing devices are removed?

Environment
  • Red Hat Enterprise Linux (RHEL) 6

  • Red Hat Enterprise Linux 5.3 and later

  • Red Hat Enterprise Linux 4.7 and later
Resolution

The udev rules supplied with Red Hat Enterprise Linux 6, 5.3 and later, and 4.7 and later can create persistent device names for SCSI devices. These names are actually persistently-named symbolic links that appear in /dev/disk/by-id (for disk devices) and /dev/tape/by-id (for tape and media changer devices). These symbolic links, which use persistent device attributes (like serial numbers), do not change when devices are added or removed from the system (which causes reordering of the /dev/nst* devices, for example).

Use of these persistently-named symbolic links is highly desirable in, for instance, the configuration of backup software (which is commonly a static definition that binds a backup software device name to an operating system-level device file name).

Note that these persistently-named symbolic links are created in addition to the default device file names in /dev (for example, /dev/nst0).

For Red Hat Enterprise Linux 4 only

By default, a Red Hat Enterprise Linux 4.7 system will not create these persistently-named symbolic links in /dev/[disk|tape]/by-id. For the persistently-named symbolic links to be created, /etc/scsi_id.config must be modified as follows:

options=-g -u

Following this modification, the system should be rebooted, or the start_udev command run, to enable creation of the persistently-named symbolic links.
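The edit can be made idempotent so that running it twice does not duplicate the line. A sketch, shown against a scratch copy of the file so it is safe to try; point target at /etc/scsi_id.config on a real Red Hat Enterprise Linux 4.7 system:

```shell
# Append 'options=-g -u' to a scsi_id.config file only if it is not already
# present. target points at a scratch demo file here; use /etc/scsi_id.config
# on a real system (as root).
target=/tmp/scsi_id.config.demo
: > "$target"                    # start from an empty demo copy

grep -qx 'options=-g -u' "$target" || echo 'options=-g -u' >> "$target"
grep -qx 'options=-g -u' "$target" || echo 'options=-g -u' >> "$target"  # re-run is a no-op

# Afterwards, reboot or run start_udev to regenerate the symlinks.
```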

Comments

Reference Links - RHEL6
Reference Links - RHEL5
Component

Comments

Seems by default WWN are used.

I have three tape drives per WWN, so am not getting all necessary nodes.

How can I modify so that Serial Numbers not WWN numbers are used for tape drives?

For example,

root@> inquire

scsidev@0.0.0:SPECTRA PYTHON          2000|Autochanger (Jukebox), /dev/sg0

                                           S/N: 901F002454

                                           ATNN=SPECTRA PYTHON          901F002454

                                           WWNN=201F0090A5002454

scsidev@0.0.1:IBM     ULTRIUM-TD4     97F9|Tape, /dev/nst0

                                           S/N: 1011002454

                                           ATNN=IBM     ULTRIUM-TD4     1011002454

                                           WWNN=201F0090A5002454

scsidev@0.0.2:IBM     ULTRIUM-TD4     97F9|Tape, /dev/nst1

                                           S/N: 1012002454

                                           ATNN=IBM     ULTRIUM-TD4     1012002454

                                           WWNN=201F0090A5002454

scsidev@0.1.0:IBM     ULTRIUM-TD4     97F9|Tape, /dev/nst2

                                           S/N: 1014002454

                                           ATNN=IBM     ULTRIUM-TD4     1014002454

                                           WWNN=201F0090A5002454

scsidev@0.2.0:IBM     ULTRIUM-TD4     97F9|Tape, /dev/nst3

                                           S/N: 1021002454

                                           ATNN=IBM     ULTRIUM-TD4     1021002454

                                           WWNN=202F0090A5002454

scsidev@0.2.1:IBM     ULTRIUM-TD4     97F9|Tape, /dev/nst4

                                           S/N: 1022002454

                                           ATNN=IBM     ULTRIUM-TD4     1022002454

                                           WWNN=202F0090A5002454

<snip - removed remaining similar tape drives>

root# > ls  -al /dev/tape/by-id/

total 0

drwxr-xr-x 2 root root 180 Jun  4 05:48 .

drwxr-xr-x 3 root root  60 Jun  4 05:36 ..

lrwxrwxrwx 1 root root   9 Jun  4 05:36 scsi-3201f0090a5002454 -> ../../sg0

lrwxrwxrwx 1 root root  10 Jun  4 05:48 scsi-3201f0090a5002454-nst -> ../../nst1

lrwxrwxrwx 1 root root  10 Jun  4 05:48 scsi-3202f0090a5002454-nst -> ../../nst5

lrwxrwxrwx 1 root root  10 Jun  4 05:48 scsi-3203f0090a5002454-nst -> ../../nst6

lrwxrwxrwx 1 root root  10 Jun  4 05:48 scsi-3204f0090a5002454-nst -> ../../nst9

lrwxrwxrwx 1 root root  11 Jun  4 05:48 scsi-3205f0090a5002454-nst -> ../../nst14

lrwxrwxrwx 1 root root  11 Jun  4 05:48 scsi-3206f0090a5002454-nst -> ../../nst16

I assume that the common WWN number is the cause of the missing drive links.  It seems to select a random drive from among those sharing a WWN number, ignoring the Serial Number (S/N) and "ATNN" value.

Issue

When I try to run "yum update", the following error occurs:

# yum update
Loaded plugins: downloadonly, rhnplugin
rhel-i386-server | 1.4 kB 00:00
rhel-i386-server5/primary | 3.3 MB 00:01
Segmentation fault

Environment

Red Hat Enterprise Linux 5.5

Resolution

Move the /usr/local/lib/libz* files out of the way so it uses the Red Hat supplied libz* libraries:

mv /usr/local/lib/libz* /tmp

Root Cause

yum is using /usr/local/lib/libz.so.1 instead of the system libraries in /usr/lib and /lib.

Diagnostic Steps

1. Tried cleaning the yum cache:

# yum clean all
# yum clean metadata
# rm -rf /var/cache/yum/*
# rhn-profile-sync
# yum check-update

... but yum still segfaults

2. Customer is using 3rd party libraries:

$ cat /etc/ld.so.conf
include ld.so.conf.d/*.conf
/usr/local/lib/
/usr/local/jpeg/lib/
/usr/local/freetype/lib/
/usr/local/gd/lib/
/usr/local/mysql/lib/
/export/sources/php-5.2.13/libs

3. Got an strace of yum:

# strace -ffvto trace_yumupdate.txt yum update
$ cat trace_yumupdate.txt.6566 | grep 'open.*/usr/local/lib' | grep -v ENOENT
15:21:40 open("/usr/local/lib/libz.so.1", O_RDONLY) = 6

... so yum is using the libz.so.1 library from /usr/local/lib instead of from /usr/lib or /lib
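The same conclusion can be reached without strace by looking at how the dynamic loader resolves libraries. A sketch, demonstrated here on canned ldd-style output rather than a live binary; on the affected system you would pipe the output of ldd for yum's Python interpreter into the helper:

```shell
# Print libraries that the dynamic loader resolves from /usr/local/lib,
# reading 'ldd'-style output on stdin.
find_local_libs() {
    awk '$3 ~ "^/usr/local/lib/" { print $1 " -> " $3 }'
}

# Canned ldd-style output standing in for 'ldd /usr/bin/python':
sample='libz.so.1 => /usr/local/lib/libz.so.1 (0x00110000)
libc.so.6 => /lib/libc.so.6 (0x00340000)'

printf '%s\n' "$sample" | find_local_libs
# prints: libz.so.1 -> /usr/local/lib/libz.so.1
```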

last modified by Imogen Flood-Murphy on 01/05/12 - 07:45

Issue

A Red Hat Enterprise Linux server that is not connected to the Internet needs to be updated, and has no access to an RHN Satellite or Proxy server.

Resolution

The server is offline and has no connection to the Internet. You therefore need a station (a workstation, laptop, or virtual machine) that runs the same major Red Hat Enterprise Linux version as the server and is connected to Red Hat Network (or a Proxy/Satellite).

  • Copy the /var/lib/rpm directory from the offline server to the station connected to the Internet (you can use USB/CD…)

    scp -r /var/lib/rpm root@station:/tmp/
    
  • Install the download only plugin for yum and createrepo on the machine which is connected to the Internet (Red Hat Network):

    yum install yum-downloadonly createrepo
    yum clean all
    
  • Backup the original rpm directory on the station and replace it with the rpm directory from the "offline" server:

    mv -v /var/lib/rpm /var/lib/rpm.orig
    mv -v /tmp/rpm /var/lib/
    
  • Download updates to /tmp/rpm_updates and return back the /var/lib/rpm

    mkdir -v /tmp/rpm_updates
    yum update --downloadonly --downloaddir /tmp/rpm_updates
    createrepo /tmp/rpm_updates
    rm -rvf /var/lib/rpm
    mv -v /var/lib/rpm.orig /var/lib/rpm
    
  • Transfer the downloaded rpms to the server and update:

    scp -r /tmp/rpm_updates root@server:/tmp/
    ssh root@server
    
    cat > /etc/yum.repos.d/rhel-offline-updates.repo << \EOF
    [rhel-offline-updates]
    name=Red Hat Enterprise Linux $releasever - $basearch - Offline Updates Repository
    baseurl=file:///tmp/rpm_updates
    enabled=1
    EOF
    
    yum upgrade
    

…and the server is updated.

These updates are the same as if "yum update" had been executed on a station that had a connection to the Internet.

last modified by Shane Bradley on 01/19/12 - 11:30

Issue
  • How do you configure an iLO3 fence device for RHEL Clustering?
Environment
  • Red Hat Cluster Suite 4+
  • Red Hat Enterprise Linux 5 Advanced Platform (Clustering)
  • Red Hat Enterprise Linux Server 6 (with the High Availability Add on)
Resolution

Support for the iLO3 fence device has been added to the fence_ipmilan fence device in the following errata: http://rhn.redhat.com/errata/RHEA-2010-0876.html.

The iLO3 firmware should be version 1.15 or later, as provided by HP.

On both cluster nodes, install the following OpenIPMI packages used for fencing:

$ yum install OpenIPMI OpenIPMI-tools

Stop and disable the 'acpid' daemon:

$ service acpid stop; chkconfig acpid off

Test ipmitool interaction with iLO3:

$ ipmitool -H <iloip> -I lanplus -U <ilousername> -P <ilopassword> chassis power status

The desired output is:

Chassis Power is on

Edit the /etc/cluster/cluster.conf to add the fence device:

<?xml version="1.0"?>
<cluster alias="rh5nodesThree" config_version="32" name="rh5nodesThree">
  <fence_daemon clean_start="0" post_fail_delay="1" post_join_delay="3"/>
  <clusternodes>
    <clusternode name="rh5node1.examplerh.com" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device domain="rh5node1" name="ilo3node1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="rh5node2.examplerh.com" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device domain="rh5node2" name="ilo3node2"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="rh5node3.examplerh.com" nodeid="3" votes="1">
      <fence>
        <method name="1">
          <device domain="rh5node3" name="ilo3node3"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman expected_votes="3">
    <multicast addr="229.5.1.1"/>
  </cman>
  <fencedevices>
    <fencedevice agent="fence_ipmilan" power_wait="10" ipaddr="XX.XX.XX.XX" lanplus="1" login="username" name="ilo3node1" passwd="password"/>
    <fencedevice agent="fence_ipmilan" power_wait="10" ipaddr="XX.XX.XX.XX" lanplus="1" login="username" name="ilo3node2" passwd="password"/>
    <fencedevice agent="fence_ipmilan" power_wait="10" ipaddr="XX.XX.XX.XX" lanplus="1" login="username" name="ilo3node3" passwd="password"/>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>

Test that fencing is successful.  From node1 attempt to fence node2 as follows:

$ fence_node node2

For more information on fencing cluster nodes manually, see the following article: How do you manually call fencing agents from the command line?

Component

last modified by Takayoshi Kimura on 02/14/12 - 02:36

Issue

The 2.6.11 Linux kernel introduced certain changes to the lpfc (Emulex driver) and qla2xxx (QLogic driver) Fibre Channel Host Bus Adapter (HBA) drivers which removed the following entries from the proc pseudo-filesystem: /proc/scsi/qla2xxx, /proc/scsi/lpfc. These entries had provided a centralized repository of information about the drivers and connected hardware. After the changes, the drivers started storing all this information within the /sys filesystem. Since Red Hat Enterprise Linux 5 uses version 2.6.18 of the Linux kernel, it is affected by this change.

Using the /sys filesystem has the advantage that all the Fibre Channel drivers now use a unified and consistent manner to report data. However it also means that the data previously available in a single file is now scattered across a myriad of files in different parts of the /sys filesystem.

One basic example is the status of a Fibre Channel HBA: checking this can now be accomplished with the following command:

# cat /sys/class/scsi_host/host#/state

where host# is the H-value in the HBTL SCSI addressing format, which references the appropriate Fibre Channel HBA. For Emulex adapters (lpfc driver), for example, this command would yield:

# cat /sys/class/scsi_host/host1/state
Link Up - Ready:
Fabric

For qlogic devices (qla2xxx driver) the output would instead be as follows:

# cat /sys/class/scsi_host/host1/state
Link Up - F_Port
Environment

Red Hat Enterprise Linux 5

Resolution

Obviously, it becomes quite impractical to search through the /sys filesystem for the relevant files when there is a large variety of Fibre Channel-related information of interest. Instead of manual searching, the systool(1) command provides a simple but powerful means of examining and analyzing this information. Detailed below are several commands which demonstrate samples of information which the systool command can be used to examine.

To examine some simple information about the Fibre Channel HBAs in a machine:

# systool -c fc_host -v

To look at verbose information regarding the SCSI adapters present on a system:

# systool -c scsi_host -v

To see what Fibre Channel devices are connected to the Fibre Channel HBA cards:

# systool -c fc_remote_ports -v -d

For Fibre Channel transport information:

# systool -c fc_transport -v

For information on SCSI disks connected to a system:

# systool -c scsi_disk -v

To examine more disk information including which hosts are connected to which disks:

# systool -b scsi -v

Furthermore, by installing the sg3_utils package it is possible to use the sg_map command to view more information about the SCSI map. After installing the package, run:

# modprobe sg

# sg_map -x

Finally, to obtain driver information, including version numbers and active parameters, the following commands can be used for the lpfc and qla2xxx drivers respectively:

# systool -m lpfc -v

# systool -m qla2xxx -v

ATTENTION: The syntax of the systool(1) command differs across versions of Red Hat Enterprise Linux. Therefore the commands above are only valid for Red Hat Enterprise Linux 5.
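To check the link state of every Fibre Channel HBA at once, the per-host state files can also be looped over directly. A minimal sketch, parameterized on the sysfs root so it can be exercised against a mock directory tree; on a live Red Hat Enterprise Linux 5 system the root would be /sys/class/fc_host:

```shell
# Print "hostN: <state>" for every host directory under the given root.
show_fc_states() {
    root=$1
    for f in "$root"/host*/; do
        [ -r "$f/state" ] || continue
        printf '%s: %s\n' "$(basename "$f")" "$(cat "$f/state")"
    done
}

# Demonstration against a mock tree; a real run would be:
#   show_fc_states /sys/class/fc_host
mkdir -p /tmp/fc_demo/host1
echo 'Link Up - Ready' > /tmp/fc_demo/host1/state
show_fc_states /tmp/fc_demo
# prints: host1: Link Up - Ready
```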

Adapters Supported: ServeRAID C100 (81Y4475)

Kernels Supported:
------------------
megasr_14.05.0701.2011-1_rhel6.1_32.img
 - kernel-2.6.32-131.0.15.el6.i686

megasr_14.05.0701.2011-1_rhel6.1_64.img
 - kernel-2.6.32-131.0.15.el6.x86_64


(C) Copyright International Business Machines Corporation 1999, 2011. All 
rights reserved.  US Government Users Restricted Rights - Use, duplication, 
or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Note: Before using this information and the product it supports, read the 
general information in "Notices and trademarks" in this document.


CONTENTS
________

1.0  Overview
2.0  Installation and setup instructions
     2.1 Working with driver image files to create driver installation media
     2.2 Network operating system installation instructions
     2.3 Troubleshooting tips
3.0  Configuration information 
4.0  Unattended mode
5.0  Web site and support phone number
6.0  Notices and trademarks
7.0  Disclaimer


1.0  Overview
_____________

  1.1    This update includes a new device driver for the ServeRAID C100 
         supporting Red Hat Enterprise Linux 6 (RHEL 6).  

  1.2    Limitations:
         - None

  1.3    Problems fixed:
         - See change history

  1.4    Level of Recommendations and Prerequisites for the update:
         - None

  1.5    Dependencies:
         - None

  1.6    Update Contents:
          o  ibm_dd_megasr_14.05.0701.2011_rhel6_32-64.zip
             - Driver update image
          o  ibm_dd_megasr_14.05.0701.2011_rhel6_32-64
             - Change history


2.0  Installation and setup instructions
________________________________________

  Use the following set of instructions to install the supported network 
  operating systems.

  2.1 Working with driver image files to create driver installation media
  -----------------------------------------------------------------------

  These driver images can be used to create a USB key, CD, DVD, or floppy disk 
  containing the driver formatted for use during the installation of the 
  operating system.
  
  1) Copy the .zip file to a temporary directory and extract it.
  
  2) Using the list of supported kernels at the top of this readme, determine 
     which set of .img files you will need for your installation.  Use these 
     files wherever 'the .img file' is referenced in this readme.
  
  3) Using the .img file from your set, create a driver update disk on a USB 
     key, CD, DVD, floppy or other media using the instructions below for your 
     media type.
  
     USB Key:
     --------
     There are two different partitioning methods for USB keys.  One of the 
     methods below will work and the other will not, depending on which way 
     your key is partitioned.  The easiest way to discover which is correct 
     for your key is to try the Quick Copy Method first.  If this method is 
     not correct you will receive a message stating that no driver could be 
     found on your media when you try to load the driver in step 3 below.  If 
     that occurs, you can use the Extraction Method and reinsert the key.  Use 
     the Back button on the installation screens to re-detect the key.  You 
     should not need to reboot or start the installation over.
     
     Quick Copy Method:  Copy the .img file to the root directory of the USB 
     key.  You do not need to remove other files from the key unless there is 
     less space than necessary for the two files.
     
     Extraction Method:  Use an img-to-media application (such as dd, rawrite,
     emt4win, or ardi4usb) to extract the image to the key.  This method 
     will overwrite all data on the key, so you will need to remove all other 
     files before extracting to the key.  Follow the instructions that came 
     with your img-to-media application to correctly extract to your key.  
  
     All other media:
     ----------------
     Use an img-to-media application (such as dd, rawrite, emt4win, or 
     ardi4usb) to extract the image to the media.  This method will overwrite 
     all data on the media.  If you are using rewritable media, you will need 
     to remove all other files before performing the extraction.  Follow the 
     instructions that came with your img-to-media application to correctly 
     extract to your media.
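As a sketch of the dd route: the target below is a scratch file so the example is safe to try, but on real media the target would be the raw device for your key or disk (for example /dev/sdb), and the copy destroys everything on it.

```shell
# Write a driver .img to media with dd. TARGET is a scratch file here so the
# sketch can be run safely; for real media, use the raw device (e.g. /dev/sdb),
# which overwrites all data on it.
IMG=/tmp/demo_driver.img
TARGET=/tmp/demo_media

dd if=/dev/zero of="$IMG" bs=512 count=4 2>/dev/null   # stand-in image file
dd if="$IMG" of="$TARGET" bs=512 2>/dev/null           # the actual copy step
ls -l "$TARGET"
```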

  2.2 Network operating system installation instructions
  ------------------------------------------------------

  Follow these instructions to add the ServeRAID C100 for System x driver 
  during the installation of RHEL 6.

  -----------------------------------------------------------------------------
  For Legacy Installations:

  Install instructions support the following NOS's:
    - Red Hat Enterprise Linux 6.1 Server Edition
         Driver Media:
            megasr_14.05.0701.2011-1_rhel6.1_32.img

    - Red Hat Enterprise Linux 6.1 Server x64 Edition
         Driver Media:
            megasr_14.05.0701.2011-1_rhel6.1_64.img

  Server Preparations:
  - Enable ServeRAID C100 (Software RAID) in F1 Setup and create a RAID volume 
    per the User Guide instructions.
  - For 64-bit versions, configure the "Legacy Only" boot option within F1 
    setup | Boot Manager.

  Installation Procedure:
  1.  Create MEGASR driver diskette or USB Key and attach the device to the 
      server.
  2.  Boot to RHEL 6 installation media to begin install.
  3.  At the "Welcome to RHEL 6" screen, highlight "Install or upgrade an  
      existing system" then press "Tab" to edit the boot options,
  4.  Add the following boot parameters to the end of the existing line, 
      using either of the following two sets of parameters:

        linux dd blacklist=ahci

        -or-

        linux dd noprobe=ata1 noprobe=ata2 noprobe=ata3 noprobe=ata4

      Press "Enter" to start the install.

  5.  When prompted, choose "Yes" to having a driver disk.
  6.  Select the device (diskette or USB key) for the MEGASR driver location.
  7.  Install any additional drivers or cancel to continue.
  8.  On the next screen, either verify the media or skip the media test as 
      prompted.
  9.  The graphic portion of the installation will begin.  Continue the 
      installation following the screens through to completion.

  -----------------------------------------------------------------------------
  For native uEFI installations:

  Follow these instructions to add the ServeRAID C100 for System x driver 
  during the installation of RHEL 6.

  Install instructions support the following NOS's:
    - Red Hat Enterprise Linux 6.1 Server x64 Edition
         Driver Media:
            megasr_14.05.0701.2011-1_rhel6.1_64.img

  Server Preparations:
  - Enable ServeRAID C100 (Software RAID) in F1 Setup and create a RAID volume 
    per the User Guide instructions.
  - Ensure the "Legacy Only" boot option within F1 setup | Boot Manager is 
    removed.

  Installation Procedure:
  1.  Create MEGASR driver diskette or USB Key and attach the device to the 
      server.
  2.  Boot to RHEL 6 installation media to begin install.
  3.  When prompted with "Booting Red Hat Enterprise Linux 6.1 in seconds...",
      press any key.
  4.  From the GNU GRUB menu, edit "Red Hat Enterprise Linux 6.1" and add either 
      of the following two sets of parameters to the end of the line:

        linux dd blacklist=ahci  

        -or-

        linux dd noprobe=ata1 noprobe=ata2 noprobe=ata3 noprobe=ata4

      Press "Enter" to save the changes and press "b" to boot with the new 
      options.

  5.  When prompted, choose "Yes" to having a driver disk.
  6.  Select the device (diskette or USB key) for the MEGASR driver location.
  7.  Install any additional drivers or cancel to continue.
  8.  On the next screen, either verify the media or skip the media test as 
      prompted.
  9.  The graphic portion of the installation will begin.  Continue the 
      installation following the screens through to completion.


  2.3 Troubleshooting tips
  ------------------------
    None


3.0  Configuration information
______________________________
		
  For detailed setup instructions for your controller, refer to the 
  ServeRAID C100 User's Guide.


4.0  Unattended Mode
____________________

  Not supported.


5.0 Web Sites and Support Phone Number
______________________________________

  o  You can find support and downloads for IBM products from the IBM Support 
     Web site:

     http://www.ibm.com/support/
     
     You can find support and downloads specific to disk controllers by 
     searching for the "Disk Controller and RAID Software Matrix" from the 
     main support page.

  o  For the latest compatibility information, see the IBM ServerProven Web 
     site:

     http://www-03.ibm.com/servers/eserver/serverproven/compat/us/

  o  With the original purchase of an IBM hardware product, you have access 
     to extensive support coverage.  During the IBM hardware product warranty 
     period, you may call the IBM HelpCenter (1-800-IBM-SERV in the U.S.) 
     for hardware product assistance covered under the terms of the 
     IBM hardware warranty.


6.0 Trademarks and Notices
__________________________

  This product may contain program code or packages ("code") licensed by third 
  parties, as well as code licensed by IBM.   For non-IBM Code, the third 
  parties, not IBM, are the licensors.  Your use of the non-IBM code is 
  governed by the terms of the license accompanying that code, as identified 
  in the attached files.  You acknowledge that you have read and agree to the 
  license agreements contained in these files. If you do not agree to the 
  terms of these third party license agreements, you may not use the 
  accompanying code.

  IBM and ServeRAID are trademarks or registered trademarks of International 
  Business Machines Corporation in the United States and other countries.

  LSI and MegaRAID are trademarks or registered trademarks of LSI Logic Corp. 
  in the United States and other countries.

  Linux is a registered trademark of Linus Torvalds in the United States and 
  other countries.

  Other company, product, and service names may be trademarks or service marks 
  of others.


7.0 Disclaimer
______________

  THIS DOCUMENT IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND.
  IBM DISCLAIMS ALL WARRANTIES, WHETHER EXPRESS OR IMPLIED,
  INCLUDING WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF FITNESS
  FOR A PARTICULAR PURPOSE AND MERCHANTABILITY WITH RESPECT TO THE
  INFORMATION IN THIS DOCUMENT.  BY FURNISHING THIS DOCUMENT, IBM
  GRANTS NO LICENSES TO ANY PATENTS OR COPYRIGHTS.

  Note to U.S. Government Users -- Documentation related to
  restricted rights -- Use, duplication or disclosure is subject
  to restrictions set forth in GSA ADP Schedule Contract with
  IBM Corporation.

last modified by Ray Dassen on 08/13/11 - 04:57

Issue

What is the SysRq facility and how do I use it?

Environment
  • Red Hat Enterprise Linux 3, 4, 5, and 6
Resolution
What is the "Magic" SysRq key?

According to the Linux kernel documentation:

It is a 'magical' key combo you can hit which the kernel will respond to regardless of whatever else it is doing, unless it is completely locked up.

The sysrq key is one of the best (and sometimes the only) way to determine what a machine is really doing. It is useful when a system appears to be "hung" or for diagnosing elusive, transient, kernel-related problems.

How do I enable and disable the SysRq key?

For security reasons, Red Hat Enterprise Linux disables the SysRq key by default. To enable it, run:

# echo 1 > /proc/sys/kernel/sysrq

To disable it:

# echo 0 > /proc/sys/kernel/sysrq

To enable it permanently, set the kernel.sysrq value in /etc/sysctl.conf to 1. That will cause it to be enabled on reboot.

# grep sysrq /etc/sysctl.conf
kernel.sysrq = 1

Since enabling sysrq gives someone with physical console access extra abilities, it is recommended to disable it when not troubleshooting a problem or to ensure that physical console access is properly secured.
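Before changing anything, it can help to confirm the current setting. A minimal sketch (0 means disabled, 1 means all functions enabled; other values are a bitmask of allowed functions):

```shell
# Print the current SysRq setting if the kernel exposes the interface;
# otherwise report that it is unavailable (e.g. on a non-Linux system).
if [ -r /proc/sys/kernel/sysrq ]; then
    cat /proc/sys/kernel/sysrq
else
    echo "sysrq interface not available"
fi
```

The same value can also be read with sysctl kernel.sysrq.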

How do I trigger a sysrq event?

There are several ways to trigger a sysrq event. On a normal system, with an AT keyboard, sysrq events can be triggered from the console with the following key combination:

Alt+PrintScreen+[CommandKey]

For instance, to tell the kernel to dump memory info (command key "m"), you would hold down the Alt and Print Screen keys, and then hit the m key.

Note that this will not work from an X Window System screen. You should first change to a text virtual terminal. Hit Ctrl+Alt+F1 to switch to the first virtual console prior to hitting the sysrq key combination.

On a serial console, you can achieve the same effect by sending a Break signal to the console and then hitting the command key within 5 seconds. This also works for virtual serial console access through an out-of-band service processor or remote console such as HP iLO, Sun ILOM, and IBM RSA. Refer to the service processor's documentation for details on how to send a Break signal; for example, How to trigger SysRq over an HP iLO Virtual Serial Port (VSP).

If you have a root shell on the machine (and the system is responding enough for you to do so), you can also write the command key character to the /proc/sysrq-trigger file. This is useful for triggering this information when you are not on the system console or for triggering it from scripts.

# echo 'm' > /proc/sysrq-trigger
When I trigger a sysrq event that generates output, where does it go?

When a sysrq command is triggered, the kernel will print out the information to the kernel ring buffer and to the system console. This information is normally logged via syslog to /var/log/messages.

Unfortunately, when dealing with machines that are extremely unresponsive, syslogd is often unable to log these events. In these situations, provisioning a serial console is often recommended for collecting the data.

What sort of sysrq events can be triggered?

There are several sysrq events that can be triggered once the sysrq facility is enabled. These vary somewhat between kernel versions, but there are a few that are commonly used:

  • m - dump information about memory allocation

  • t - dump thread state information

  • p - dump current CPU registers and flags

  • c - intentionally crash the system (useful for forcing a diskdump or netdump)

  • s - immediately sync all mounted filesystems

  • u - immediately remount all filesystems read-only

  • b - immediately reboot the machine

  • o - immediately power off the machine (if configured and supported)

  • f - start the Out Of Memory Killer (OOM)

  • w - dump tasks that are in uninterruptible (blocked) state
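
As a sketch, the pieces above can be combined into a guarded command that attempts the 'm' (memory info) event and reports the outcome; this is illustrative only and needs root plus an enabled sysrq facility to actually fire:

```shell
# Try to trigger the 'm' SysRq event; on success the dump appears in the
# kernel ring buffer (dmesg) and normally in /var/log/messages.
if echo m > /proc/sysrq-trigger 2>/dev/null; then
    echo "sysrq 'm' triggered; see dmesg or /var/log/messages for the dump"
else
    echo "could not trigger sysrq (requires root and an enabled facility)"
fi
```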

last modified by Andrius Benokraitis on 10/04/11 - 12:21

NOTE: The following information has been provided by Red Hat, but is outside the scope of our posted Service Level Agreements (https://www.redhat.com/support/service/sla/) and support procedures. The information is provided as-is and any configuration settings or installed applications made from the information in this article could make your Operating System unsupported by Red Hat Support Services. The intent of this article is to provide you with information to accomplish your system needs. Use the information in this article at your own risk.

Issue

  • Red Hat Network (RHN) does not contain Red Hat Enterprise Linux 4.9 installation ISOs.[1]

Environment

  • Red Hat Enterprise Linux 4.8 without access to Red Hat Network

Resolution

  • Create a Reference System that connects to Red Hat Network and downloads the latest RHEL 4 packages. Those downloaded packages are then used to upgrade the Target System from Red Hat Enterprise Linux 4.8 to Red Hat Enterprise Linux 4.9 without connecting to Red Hat Network.
  • Reference System: Red Hat Enterprise Linux 4.8 installed and connected to Red Hat Network

  • Target System: Red Hat Enterprise Linux 4.8 installed but not connected to Red Hat Network

  • It is assumed that the Reference System is identical or similar to the Target System, including architecture type. If they cannot be similar, it is recommended that the Reference System be an @everything installation to minimize missed package updates.

Reference System Setup
  • Issue the following commands as root user on the Reference System after installing a base Red Hat Enterprise Linux 4.8 system from Red Hat Network.
  • Ensure there are no previously downloaded RPMs on the system:

rm -rf /var/spool/up2date/*

  • Download all available updates (including those on the "skip" list) from RHN and store them in /var/spool/up2date :

up2date -u -v -d -f

  • Transfer the downloaded packages to an empty mounted device for later use on the Target System:

cp /var/spool/up2date/*.rpm /media/flash_drive
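
The download-and-transfer steps above can be sanity-checked by comparing package counts on both sides. The sketch below simulates this with temporary directories so it can be run anywhere; on the real Reference System the paths are /var/spool/up2date and the mounted device (e.g. /media/flash_drive):

```shell
# Simulated transfer: stand-in directories and empty .rpm files replace
# the real spool directory and flash drive.
spool=$(mktemp -d)                      # stands in for /var/spool/up2date
dest=$(mktemp -d)                       # stands in for /media/flash_drive
touch "$spool/a.rpm" "$spool/b.rpm"     # stand-ins for downloaded packages
cp "$spool"/*.rpm "$dest"/
# The same number of packages on both sides means the copy is complete.
[ "$(ls "$spool" | wc -l)" -eq "$(ls "$dest" | wc -l)" ] && echo "package sets match"
rm -rf "$spool" "$dest"
```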

Target System Setup
  • Perform the following actions as root user on the Target System after completing the previous steps with the Reference System.
  • Mount the device containing the updated packages.

  • Edit the /etc/sysconfig/rhn/sources file with the following:

...
#up2date default
dir rhel49 /media/flash_drive
...

These edits comment out the default up2date source and replace it with the locally mounted device.

  • Import the RPM GPG key:

rpm --import /usr/share/rhn/RPM-GPG-KEY

  • Update all packages (including the kernel) on the Target System:

up2date -uf

  • Reboot the system.

[1]  The Red Hat Enterprise Linux 4 Life Cycle entered Production 3 Phase on 16-Feb-2011 with the release of Red Hat Enterprise Linux 4.9. No new features, hardware support or updated installation images (ISOs) are released during the Production 3 phase. Refer to the Red Hat Enterprise Linux Support Policy for details on the life cycle of Red Hat Enterprise Linux releases.

last modified by Raghu Udiyar on 12/09/11 - 15:24

Release found: Red Hat Enterprise Linux 5

Problem

You need to install Red Hat Enterprise Linux on a server which does not have a floppy drive or CD-ROM drive, but which does have a USB port.

Assumptions

  • Your network environment is not set up to allow Red Hat Enterprise Linux to be installed completely from the network (through PXE boot). If it is, please make use of this option, as it is more straightforward than the procedure documented here.
  • Your network environment is configured to provide the contents of the Red Hat Enterprise Linux DVDs through a protocol supported by the Red Hat Enterprise Linux installer, such as NFS or FTP.
  • The server's BIOS supports booting from a USB mass storage device like a flash/pen drive.

Solution

The following steps configure a USB pen drive as a boot medium to start the installation of Red Hat Enterprise Linux.

  1. Attach the USB pen drive to a system which is already running Red Hat Enterprise Linux.
  2. Run

    dmesg

  3. From the dmesg output,  identify the device name under which the drive is known to the system.

    Sample messages for a 1 GB flash disk being recognized as /dev/sdb:

    Initializing USB Mass Storage driver...
    scsi2 : SCSI emulation for USB Mass Storage devices
    usb-storage: device found at 5
    usb-storage: waiting for device to settle before scanning
    usbcore: registered new driver usb-storage
    USB Mass Storage support registered.
      Vendor: USB 2.0   Model: Flash Disk        Rev: 5.00
      Type:   Direct-Access                      ANSI SCSI revision: 02
    SCSI device sdb: 2043904 512-byte hdwr sectors (1046 MB)
    sdb: Write Protect is off
    sdb: Mode Sense: 0b 00 00 08
    sdb: assuming drive cache: write through
    SCSI device sdb: 2043904 512-byte hdwr sectors (1046 MB)
    sdb: Write Protect is off
    sdb: Mode Sense: 0b 00 00 08
    sdb: assuming drive cache: write through
     sdb: sdb1
    sd 2:0:0:0: Attached scsi removable disk sdb
    sd 2:0:0:0: Attached scsi generic sg1 type 0
    usb-storage: device scan complete

  4. Note: For the remainder of this article, we will assume this device name to be /dev/sdb. Make sure you adjust the device references in the following steps as per your local situation.

  5. At this point, the flash drive has probably been mounted automatically by the system. Make sure it is unmounted, e.g. in Nautilus by right-clicking the drive's icon and selecting Unmount Volume.
  6. Use fdisk to partition the flash drive as follows:
    • There is a  single partition.
    • This partition is numbered as 1.
    • Its partition type is set to 'b' (W95 FAT32).
    • It is tagged as bootable.
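    Step 6 can also be done non-interactively with sfdisk (part of util-linux, reasonably recent versions) rather than interactive fdisk. The sketch below runs against a throwaway file-backed image so no real disk is touched; on the actual drive the target would be /dev/sdb:

```shell
# Create a small image file and give it a single bootable W95 FAT32 (0x0b)
# partition, then print the resulting table.
if command -v sfdisk >/dev/null 2>&1; then
    truncate -s 32M stick.img
    printf 'type=b, bootable\n' | sfdisk -q stick.img
    sfdisk -l stick.img
    rm -f stick.img
else
    echo "sfdisk not installed; partition interactively with fdisk instead"
fi
```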
  7. Format the partition created in the previous step as FAT:

    mkdosfs /dev/sdb1

  8. Mount the partition:

    mount /dev/sdb1 /mnt

  9. Copy the contents of /RedHat/isolinux/ from the first installation CD/DVD onto the flash drive, i.e. to /mnt.

    Note: the files isolinux.bin, boot.cat, and TRANS.TBL are not needed and can thus be removed or deleted.

  10. Rename the configuration file:

    cd /mnt/; mv isolinux.cfg syslinux.cfg

  11. Copy the installer's initial RAM disk /RedHat/images/pxeboot/initrd.img from the first installation CD/DVD onto the flash drive, i.e. to /mnt.

  12. Optional step: To configure any boot settings, edit syslinux.cfg on the USB flash drive. For example, to have the installation use a kickstart file shared over NFS, specify the following (replace <server> and <path> with your NFS server and export path):

    linux ks=nfs:<server>:/<path>/ks.cfg

  13. Unmount the flash drive:

    umount /dev/sdb1

  14. Make the USB flash drive bootable. The flash drive must be unmounted for this to work properly.

    syslinux /dev/sdb1

  15. Mount the flash drive again:

    mount /dev/sdb1 /mnt

  16. Install GRUB on the USB flash drive:

    grub-install --root-directory=/mnt /dev/sdb

  17. Verify that the USB flash drive has a /boot/grub directory. If it does not, create the directory manually.

    cd /mnt

    mkdir -p boot/grub

  18. Create the grub.conf file. Below is a sample grub.conf:

    default=0
    timeout=5
    title Red Hat Enterprise Linux installer
        root (hd1,0)
        kernel /vmlinuz
        initrd /initrd.img

  19. Copy or confirm the created grub.conf file is on the /boot/grub/ directory of the USB flash drive.

  20. Unmount the flash drive:

    umount /dev/sdb1

  21. At this point, the USB disk should be bootable.

  22. Attach the USB disk to the system you wish to install Red Hat Enterprise Linux on.
  23. Boot from the USB disk. Refer to the hardware vendor's BIOS documentation for details on changing the order in which devices are checked for booting from.
  24. Once you are booted in the Red Hat Enterprise Linux installer, continue with your network installation of choice.