Dell™ PowerVault™ Modular Disk 3000i
Systems Installation Guide

www.dell.com | support.dell.com
Contents

1 Introduction . . . 7
2 Hardware Installation . . . 9
   Direct-Attached Solutions . . . 10
   Network-Attached Solutions . . . 13
3 Software Installation . . . 19
   Installing MD Storage Software . . . 23
   Installing MD Storage Software on an iSCSI-attached Host Server (Windows) . . . 23
   Installing MD Storage Software on an iSCSI-attached Host Server (Linux) . . . 25
   Installing a Dedicated Management Station (Windows and Linux) . . . 27
   Viewing Resource CD Contents . . . 28
   Installing the Manuals . . . 28
4 Array Setup and iSCSI Configuration
   Before You Start
   Using iSNS
   What is CHAP? . . . 45
   Target CHAP . . . 45
   Mutual CHAP . . . 45
   CHAP Definitions . . . 46
   How CHAP Is Set Up . . . 46
   If you are using Windows Server 2003 or Windows Server 2008 . . . 57
   Viewing the status of your iSCSI connections
5 Guidelines for Configuring Your Network for iSCSI
   Windows Host Setup . . . 63
   Configuring TCP/IP on Linux using DHCP (root users only) . . . 64
   Configuring TCP/IP on Linux using a Static IP address (root users only) . . . 65
Index . . . 67
Introduction  
This guide outlines the steps for configuring the Dell™ PowerVault™ Modular Disk 3000i (MD3000i). The guide also covers installing the MD Storage Manager software, installing and configuring the Microsoft® iSCSI and Linux initiators, and accessing documentation from the PowerVault MD3000i Resource CD. Other information provided includes system requirements, storage array organization, initial software startup and verification, and discussions of utilities and premium features.
MD Storage Manager enables an administrator to configure and monitor storage arrays for optimum usability. MD Storage Manager operates on both Microsoft® Windows® and Linux operating systems and can send alerts about storage array error conditions by either e-mail or Simple Network Management Protocol (SNMP). Alerts can be configured for immediate notification or for delivery at regular intervals.
System Requirements  
Before installing and configuring the MD3000i hardware and MD Storage Manager software, ensure  
that the operating system is supported and minimum system requirements are met. For more  
information, refer to the Dell™ PowerVault™ MD3000i Support Matrix available on support.dell.com.
Management Station Hardware Requirements  
A management station uses MD Storage Manager to configure and manage storage arrays across the  
network. Any system designated as a management station must be an x86-based system that meets  
the following minimum requirements:  
Intel® Pentium® or equivalent CPU (133 MHz or faster)
128 MB RAM (256 MB recommended)
120 MB disk space available
Administrator or equivalent permissions
Minimum display setting of 800 x 600 pixels with 256 colors (1024 x 768 pixels with 16-bit color recommended)
Introduction to Storage Arrays  
A storage array includes various hardware components, such as physical disks, RAID controller modules,  
fans, and power supplies, gathered into enclosures. An enclosure containing physical disks accessed  
through RAID controller modules is called a RAID enclosure.  
One or more host servers attached to the storage array can access the data on the storage array. You can  
also establish multiple physical paths between the host(s) and the storage array so that loss of any single  
path (through failure of a host server port, for example) does not result in total loss of access to data on the  
storage array.  
The storage array is managed by MD Storage Manager software running either on a host server or a  
dedicated management station. On a host server system, MD Storage Manager and the storage array  
communicate management requests and event information directly via iSCSI ports. On a dedicated  
management station, MD Storage Manager communicates with the storage array either through an  
Ethernet connection on the RAID controller modules or via the host agent installed on the host server.  
Using MD Storage Manager, you configure the physical disks in the storage array into logical components  
called disk groups, then divide the disk groups into virtual disks. You can make as many disk groups and  
virtual disks as your storage array configuration and hardware permit. Disk groups are created in the  
unconfigured capacity of a storage array, while virtual disks are created in the free capacity of a disk  
group.  
Unconfigured capacity consists of the physical disks not yet assigned to a disk group. When a
virtual disk is created using unconfigured capacity, a disk group is automatically created. If the only  
virtual disk in a disk group is deleted, the disk group is also deleted. Free capacity is space in a disk group  
that has not been assigned to a virtual disk.  
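As a worked example of this bookkeeping, the relationship between unconfigured and free capacity can be sketched with shell arithmetic. The disk count and sizes below are made-up illustrative values, not MD3000i limits, and RAID overhead is ignored for simplicity:

```shell
# Hypothetical array: 6 physical disks of 250 GB each (illustrative values only)
total_disks=6; disk_gb=250
group_disks=4        # disks assigned to one disk group
vdisk_gb=600         # one virtual disk carved from that group
# Unconfigured capacity: disks not yet assigned to any disk group
unconfigured_gb=$(( (total_disks - group_disks) * disk_gb ))
# Free capacity: disk-group space not yet assigned to a virtual disk
# (ignoring RAID overhead for simplicity)
free_gb=$(( group_disks * disk_gb - vdisk_gb ))
echo "Unconfigured: ${unconfigured_gb} GB, free in group: ${free_gb} GB"
```

Deleting that virtual disk would return its 600 GB to the group's free capacity; deleting the disk group would return all four disks to unconfigured capacity.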
Data is written to the physical disks in the storage array using RAID technology. RAID levels define the  
way in which data is written to physical disks. Different RAID levels offer different levels of accessibility,  
redundancy, and capacity. You can set a specified RAID level for each disk group and virtual disk on your  
storage array.  
You can also provide an additional layer of data redundancy by creating disk groups that have a RAID  
level other than 0. Hot spares can automatically replace physical disks marked as Failed.  
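To make the capacity trade-off between RAID levels concrete, here is a rough sketch using illustrative disk counts and sizes (not MD3000i specifics):

```shell
# Hypothetical disk group: 4 physical disks of 250 GB each
disks=4; disk_gb=250
raid0_gb=$(( disks * disk_gb ))         # striping: full capacity, no redundancy
raid1_gb=$(( disks * disk_gb / 2 ))     # mirroring: half the raw capacity
raid5_gb=$(( (disks - 1) * disk_gb ))   # one disk's worth of parity overhead
echo "RAID 0: ${raid0_gb} GB, RAID 1: ${raid1_gb} GB, RAID 5: ${raid5_gb} GB"
```

The higher the redundancy, the less usable capacity remains, which is the trade-off the paragraph above describes.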
For more information on using RAID and managing data in your storage solution, see the Dell™  
PowerVault™ Modular Disk Storage Manager User’s Guide.  
Hardware Installation  
This chapter provides guidelines for planning the physical configuration of your Dell™ PowerVault™  
MD3000i storage array and for connecting one or more hosts to the array. For complete information  
on hardware configuration, see the Dell™ PowerVault™ MD3000i Hardware Owner’s Manual.
Storage Configuration Planning  
Consider the following items before installing your storage array:  
Evaluate data storage needs and administrative requirements.  
Calculate availability requirements.  
Decide the frequency and level of backups, such as weekly full backups with daily partial backups.  
Consider storage array options, such as password protection and e-mail alert notifications for error  
conditions.  
Design the configuration of virtual disks and disk groups according to a data organization plan.  
For example, use one virtual disk for inventory, a second for financial and tax information, and a  
third for customer information.  
Decide whether to allow space for hot spares, which automatically replace failed physical disks.  
If you will use premium features, consider how to configure virtual disk copies and snapshot  
virtual disks.  
About the Enclosure Connections  
The RAID array enclosure is connected to an iSCSI-enabled host server via one or two RAID  
controller modules. The RAID controller modules are identified as RAID controller module 0 and  
RAID controller module 1 (see the PowerVault MD3000i Hardware Owner’s Manual for more  
information).  
Each RAID controller module contains two iSCSI In port connectors that provide direct connections  
to the host server or switches. iSCSI In port connectors are labeled In-0 and In-1 (see the PowerVault
MD3000i Hardware Owner’s Manual for more information).  
Each MD3000i RAID controller module also contains an Ethernet management port and a SAS Out  
port connector. The Ethernet management port allows you to install a dedicated management  
station (server or standalone system). The SAS Out port allows you to connect the RAID enclosure  
to an optional expansion enclosure (MD1000) for additional storage capacity.  
Cabling the Enclosure  
You can connect up to 16 hosts and two expansion enclosures to the storage array.  
To plan your configuration, complete the following tasks:  
1 Evaluate your data storage needs and administrative requirements.
2 Determine your hardware capabilities and how you plan to organize your data.
3 Calculate your requirements for the availability of your data.
4 Determine how you plan to back up your data.
The iSCSI interface provides many versatile host-to-controller configurations. For the purposes of this  
manual, the most conventional topologies are described. The figures in this chapter are grouped  
according to the following general categories:  
Direct-attached solutions  
Network-attached (SAN) solutions  
Redundancy vs. Nonredundancy  
Nonredundant configurations, which provide only a single data path from a host to the
RAID enclosure, are recommended only for non-critical data storage. A path failure from a failed or
removed cable, a failed NIC, or a failed or removed RAID controller module results in loss of host access  
to storage on the RAID enclosure.  
Redundancy is established by installing separate data paths between the host and the storage array,
with each path connected to a different RAID controller module. Redundancy protects the host from losing
access to data in the event of path failure, because both RAID controllers can access all the disks in the  
storage array.  
Direct-Attached Solutions  
You can cable from the Ethernet ports of your host servers directly to your MD3000i RAID controller  
iSCSI ports. Direct attachments support single path configurations (for up to four servers) and dual path  
data configurations (for up to two servers) for both single and dual controller modules.  
With a single path configuration, a group of heterogeneous clients can be connected to the MD3000i  
RAID controller through a single physical Ethernet port. Because there is only the single port, there is no  
redundancy (although each iSCSI portal supports multiple connections). This configuration is  
supported for both single controller and dual controller modes.  
Figure 2-1 and Figure 2-2 show the supported nonredundant cabling configurations to MD3000i RAID  
controller modules using the single path data configuration. Figure 2-1 shows a single controller array  
configuration. Figure 2-2 shows how four standalone servers are supported in a dual controller array  
configuration.  
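The limits above (four single-path servers, two dual-path servers) follow from the port budget. A quick arithmetic sketch, assuming the two iSCSI In ports per controller described earlier:

```shell
# Port budget for a dual-controller (duplex) MD3000i enclosure
controllers=2; ports_per_controller=2
total_ports=$(( controllers * ports_per_controller ))
single_path_hosts=$total_ports               # one port per host
dual_path_hosts=$(( total_ports / 2 ))       # one port on each controller per host
echo "Single-path: up to ${single_path_hosts} hosts; dual-path: up to ${dual_path_hosts} hosts"
```

A single-controller (simplex) enclosure halves both figures, which matches the simplex configurations shown in the figures.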
Figure 2-1. One or Two Direct-Attached Servers (or Two-Node Cluster), Single-Path Data, Single Controller (Simplex)
Figure callouts: 1 standalone (one or two) host server; 2 two-node cluster; 3 Ethernet management port; 4 MD3000i RAID Enclosure (single controller); 5 corporate, public or private network. The figure also marks the management traffic path.
Figure 2-2. Up to Four Direct-Attached Servers, Single-Path Data, Dual Controllers (Duplex)
Figure callouts: 1 standalone (up to four) host server; 2 Ethernet management port (2); 3 MD3000i RAID Enclosure (dual controllers); 4 corporate, public or private network. The figure also marks the management traffic path.
Dual Path Data Configuration  
In Figure 2-3, up to two servers are directly attached to the MD3000i RAID controller module. If the host  
server has a second Ethernet connection to the array, it can be attached to the iSCSI ports on the array’s  
second controller. This configuration provides improved availability by allowing two separate physical  
paths for each host, which ensures full redundancy if one of the paths fails.
Figure 2-3. One or Two Direct-Attached Servers (or Two-Node Cluster), Dual-Path Data, Dual Controllers (Duplex)
Figure callouts: 1 standalone (one or two) host server; 2 two-node cluster; 3 Ethernet management port (2); 4 MD3000i RAID Enclosure (dual controllers); 5 corporate, public or private network.
Network-Attached Solutions
You can also cable your host servers to the MD3000i RAID controller iSCSI ports through an industry-standard 1-Gbps Ethernet switch on an IP storage area network (SAN). By using an IP SAN "cloud" Ethernet
switch, the MD3000i RAID controller can support up to 16 hosts simultaneously with multiple  
connections per session. This solution supports either single- or dual-path data configurations, as well as  
either single or dual controller modules.  
Figure 2-4 shows how up to 16 standalone servers can be attached (via multiple sessions) to a single  
MD3000i RAID controller module through a network. Hosts that have a second Ethernet connection to  
the network allow two separate physical paths for each host, which ensures full redundancy if one of the  
paths fails. Figure 2-5 shows how the same number of hosts can be similarly attached to a dual MD3000i
RAID controller array configuration.  
Figure 2-4. Up to 16 SAN-Configured Servers, Single-Path Data, Single Controller (Simplex)
Figure callouts: 1 up to 16 standalone host servers; 2 IP SAN (Gigabit Ethernet switch); 3 Ethernet management port; 4 MD3000i RAID Enclosure (single controller); 5 corporate, public or private network.
Figure 2-5. Up to 16 Dual SAN-Configured Servers, Dual-Path Data, Dual Controllers (Duplex)
Figure callouts: 1 up to 16 standalone host servers; 2 IP SAN (dual Gigabit Ethernet switches); 3 Ethernet management port (2); 4 MD3000i RAID Enclosure (dual controllers); 5 corporate, public or private network.
Attaching MD1000 Expansion Enclosures  
One of the features of the MD3000i is the ability to add up to two MD1000 expansion enclosures for  
additional capacity. This expansion increases the maximum physical disk pool to 45 3.5" SAS and/or  
SATA II physical disks.  
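The 45-disk figure follows from the per-enclosure drive count (each MD3000i or MD1000 enclosure holds 15 of the 3.5" drives):

```shell
# Capacity arithmetic behind the 45-disk maximum
drives_per_enclosure=15   # 3.5" drive bays per MD3000i/MD1000 enclosure
enclosures=3              # the MD3000i itself plus two MD1000 expansions
max_disks=$(( drives_per_enclosure * enclosures ))
echo "Maximum physical disk pool: ${max_disks}"
```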
As described in the following sections, you can expand with either a brand new MD1000 or an MD1000  
that has been previously configured in a direct-attach solution with a PERC 5/E system.  
NOTICE: Ensure that all MD1000 expansion enclosures being connected to the MD3000i are first updated to the  
latest Dell MD1000 EMM Firmware (available from support.dell.com). Dell MD1000 EMM Firmware versions prior to  
A03 are not supported in an MD3000i array; attaching an MD1000 with unsupported firmware causes an uncertified  
condition to exist on the array. See the following procedure for more information.  
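As a sanity check before cabling, the A03 minimum can be compared against the EMM firmware revision reported for the enclosure. This is a sketch assuming Dell's "A"-prefixed revision strings, which sort lexicographically:

```shell
# Compare a reported EMM firmware revision against the A03 minimum
minimum="A03"
fw="A02"                  # example value; substitute the revision your MD1000 reports
if [ "$fw" \< "$minimum" ]; then
    status="update required"
else
    status="supported"
fi
echo "EMM firmware $fw: $status"
```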
Expanding with Previously Configured MD1000 Enclosures  
Use this procedure if your MD1000 is now directly attached to and configured on a Dell PERC 5/E  
system. Data from virtual disks created on a PERC 5 SAS controller cannot be directly migrated to an  
MD3000i or to an MD1000 expansion enclosure connected to an MD3000i.  
NOTICE: If an MD1000 that was previously attached to a PERC 5 SAS controller is used as an expansion enclosure
to an MD3000i, the physical disks of the MD1000 enclosure will be reinitialized and data will be lost. All data on the  
MD1000 must be backed up before attempting the expansion.  
Perform the following steps to attach previously configured MD1000 expansion enclosures to the MD3000i:  
1 Back up all data on the MD1000 enclosure(s).
2 While the enclosure is still attached to the PERC 5 controller, upgrade the MD1000 firmware to version A03 or above. Windows users can use the DUP.exe package; Linux users can use the DUP.bin package.
3 Before adding the MD1000 enclosure(s), make sure the MD3000i software is installed and up to date. For more information, refer to the Dell™ PowerVault™ MD3000i Support Matrix available on support.dell.com.
a Install or update (to the latest version available on support.dell.com) the MD Storage Manager on each host server. Install or update (to the latest version available on support.dell.com) the multipath drivers on each host server. The multipath drivers are bundled with the Modular Disk Storage Manager installation. On Windows systems, the drivers are automatically installed when a Full or Host selection is made.
b Using the MD Storage Manager, update the MD3000i RAID controller firmware to the latest version available on support.dell.com (Support→Download Firmware→Download RAID Controller Module Firmware) and the NVSRAM (Support→Download Firmware→Download RAID Controller Module NVSRAM).
4 Stop I/O and turn off all systems:
a Stop all I/O to the array and turn off affected host systems attached to the MD3000i.
b Turn off the MD3000i.
c Turn off the MD1000 enclosure(s).
5 Referencing the applicable configuration for your rack (Figure 2-1 through Figure 2-5), cable the MD1000 enclosure(s) to the MD3000i.
6 Turn on attached units:
a Turn on the MD1000 expansion enclosure(s). Wait for the enclosure status LED to light blue.
b Turn on the MD3000i and wait for the status LED to indicate that the unit is ready:
If the status LEDs light a solid amber, the MD3000i is still coming online.
If the status LEDs are blinking amber, there is an error that can be viewed using the MD Storage Manager.
If the status LEDs light a solid blue, the MD3000i is ready.
c After the MD3000i is online and ready, turn on any attached host systems.
7 After the MD1000 is configured as the expansion enclosure to the MD3000i, restore the data that was backed up in step 1.
After they are online, the MD1000 enclosures are available for use within the MD3000i system.  
Expanding with New MD1000 Enclosures  
Perform the following steps to attach new MD1000 expansion enclosures to the MD3000i:  
1 Before adding the MD1000 enclosure(s), make sure the MD3000i software is installed and up to date. For more information, refer to the Dell™ PowerVault™ MD3000i Support Matrix available on support.dell.com.
a Install or update (to the latest version available on support.dell.com) the MD Storage Manager on each host server.
b Install or update (to the latest version available on support.dell.com) the multipath drivers on each host server.
c Using the MD Storage Manager, update the MD3000i RAID controller firmware (Support→Download Firmware→Download RAID Controller Module Firmware) and the NVSRAM (Support→Download Firmware→Download RAID Controller Module NVSRAM).
2 Stop I/O and turn off all systems:
a Stop all I/O to the array and turn off affected host systems attached to the MD3000i.
b Turn off the MD3000i.
c Turn off any MD1000 enclosures in the affected system.
3 Referencing the applicable configuration for your rack (Figure 2-1 through Figure 2-5), cable the MD1000 enclosure(s) to the MD3000i.
4 Turn on attached units:
a Turn on the MD1000 expansion enclosure(s). Wait for the enclosure status LED to light blue.
b Turn on the MD3000i and wait for the status LED to indicate that the unit is ready:
If the status LEDs light a solid amber, the MD3000i is still coming online.
If the status LEDs are blinking amber, there is an error that can be viewed using the MD Storage Manager.
If the status LEDs light a solid blue, the MD3000i is ready.
c After the MD3000i is online and ready, turn on any attached host systems.
5 Using the MD Storage Manager, update all attached MD1000 firmware if it is out of date:
a Select Support→Download Firmware→Download Environmental (EMM) Card Firmware.
b Check the Select All check box so that all attached MD1000 enclosures are updated at the same time (each takes approximately 8 minutes to update).
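A rough time budget for the batch update follows directly from the per-enclosure figure above:

```shell
# Estimate total EMM update time (approx. 8 minutes per enclosure, per step b)
enclosures=2; minutes_each=8
total_minutes=$(( enclosures * minutes_each ))
echo "Plan for roughly ${total_minutes} minutes of update time"
```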
Software Installation  
The MD3000i Resource CD contains all documentation pertinent to MD3000i hardware and  
MD Storage Manager software. It also includes software and drivers for both Linux and Microsoft® Windows® operating systems.
The MD3000i Resource CD contains a readme.txt file covering changes to the software, updates,  
fixes, patches, and other important data applicable to both Linux and Windows operating systems.  
The readme.txt file also specifies requirements for accessing documentation, information regarding  
versions of the software on the CD, and system requirements for running the software.  
For more information on supported hardware and software for Dell™ PowerVault™ systems, refer to  
the Dell™ PowerVault™ MD3000i Support Matrix located at support.dell.com.
Dell recommends installing all the latest updates available at support.dell.com.  
Use the following procedure to assemble and start your system for the first time:  
1 Install the NIC(s) in each host server that you attach to the MD3000i Storage Array, unless the NIC was factory installed. For general information on setting up your IP addresses, see Guidelines for Configuring Your Network for iSCSI.
2 Cable the storage array to the host server(s), either directly or via a switch.
3 Cable the Ethernet management ports on the storage array to either the management network (iSCSI-attached host server) or a dedicated management station (non-iSCSI).
4 Power on the storage array and wait for the status LED to turn blue.
5 Start up each host server that is cabled to the storage array.
Install the iSCSI Initiator Software (iSCSI-attached Host Servers Only)
To configure iSCSI later in this document (see "Array Setup and iSCSI Configuration"), you must  
install the Microsoft iSCSI initiator on any host server that will access your storage array before you  
install the MD Storage Manager software.  
NOTE: Windows Server® 2008 contains a built-in iSCSI initiator. If your system is running Windows Server 2008, you do not need to install the iSCSI initiator as shown in this section. Skip directly to "Installing MD Storage Software."
Depending on whether you are using a Windows Server 2003 operating system or a Linux operating  
system, refer to the following steps for downloading and installing the iSCSI initiator.  
Installing the iSCSI Initiator on a Windows Host Server  
1 Refer to the Dell™ PowerVault™ MD3000i Support Matrix on support.dell.com for the latest version and download location of the Microsoft iSCSI Software Initiator software.
2 From the host server, download the iSCSI Initiator software.
3 Once the installation begins and the Microsoft iSCSI Initiator Installation setup panel appears, select Initiator Service and Software Initiator.
4 Do NOT select Microsoft MPIO Multipathing Support for iSCSI.
NOTICE: Make sure the Microsoft MPIO Multipathing Support for iSCSI option is NOT selected. Selecting this option causes the iSCSI initiator setup to function improperly.
5 Accept the license agreement and finish the installation.
NOTE: If you are prompted to do so, reboot your system.
Installing the iSCSI Initiator on a Linux Host Server  
Follow the steps in this section to install the iSCSI initiator on a Linux server.  
NOTE: All appropriate Linux iSCSI initiator patches are installed using the MD3000i Resource CD during MD  
Storage Manager Software installation.  
Installing the iSCSI Initiator on a RHEL 4 System  
You can install the iSCSI initiator software on Red Hat® Enterprise Linux® 4 systems either during or after operating system installation.
To install the iSCSI initiator during RHEL 4 installation:  
1 When the Package Installation Defaults screen is displayed, select the Customize the set of packages to be installed option. Click Next to go to the Package Group Selection screen.
2 In the Servers list, select the Network Servers package and click Details to display a list of Network Server applications.
3 Select the iscsi-initiator-utils - iSCSI daemon and utility programs option.
4 Click OK, then Next to continue with the installation.
To install the iSCSI initiator after RHEL 4 installation:  
1 From the desktop, click Applications→System Settings→Add/Remove Applications. The Package Group Selection screen is displayed.
2 In the Servers list, select the Network Servers package and click Details to display a list of Network Server applications.
3 Select the iscsi-initiator-utils - iSCSI daemon and utility programs option.
4 Click Close, then Update.
NOTE: Depending upon your installation method, the system will ask for the required source to install the package.
Installing the iSCSI Initiator on a RHEL 5 System  
You can install the iSCSI initiator software on Red Hat Enterprise Linux 5 systems either during or after  
operating system installation. With this version of the Linux software, you can also elect to install the  
iSCSI initiator after the operating system installation via the command line.  
To install the iSCSI initiator during RHEL 5 installation:  
1 When the Package Installation Defaults screen is displayed, select the Customize now option.
2 Click Next to go to the Package Group Selection screen.
3 Select Base System, then select the Base option.
4 Click Optional Packages.
5 Select the iscsi-initiator-utils option.
6 Click OK, then Next to continue with the installation.
To install the iSCSI initiator after RHEL 5 installation:  
1 From the desktop, select Applications→Add/Remove Software. The Package Manager screen is displayed.
2 In the Package Manager screen, select the Search tab.
3 Search for iscsi-initiator-utils.
4 When it is displayed, select the iscsi-initiator-utils option.
5 Click Apply.
NOTE: Depending upon your installation method, the system will ask for the required source to install the package.
NOTE: This method might not work if network access to a Red Hat Network repository is not available.
To install the iSCSI initiator after RHEL 5 installation via the command line:  
1 Insert the RHEL 5 installation CD 1 or DVD. If your media is not automounted, mount it manually. The iscsi-initiator-utils rpm file is located in the Server or Client subdirectory.
2 Run the following command:
rpm -i /path/to/media/Server/iscsi-initiator-utils-*.rpm
Installing the iSCSI Initiator on a SLES 9 System  
You can install the iSCSI initiator software on SUSE® Linux Enterprise Server (SLES) 9 SP3 systems either during or after operating system installation.
To install the iSCSI initiator during SLES 9 installation:  
1 At the YaST Installation Settings screen, click Change.
2 Click Software, then select Detailed Selection to see a complete list of packages.
3 Select Various Linux Tools, then select linux-iscsi.
4 Click Accept. If a dependencies window is displayed, click Continue and proceed with the installation.
To install the iSCSI initiator after SLES 9 installation:  
1 From the Start menu, select System→YaST.
2 Select Software, then Install and Remove Software.
3 In the Search box, enter linux-iscsi.
4 When the linux-iscsi module is displayed, select it.
5 Click Check Dependencies to determine if any dependencies exist.
6 If no dependencies are found, click Accept.
Installing the iSCSI Initiator on a SLES 10 SP1 System  
You can install the iSCSI initiator software on SUSE Linux Enterprise Server Version 10 systems either  
during or after operating system installation.  
To install the iSCSI initiator during SLES 10 SP1 installation:  
1 At the YaST Installation Settings screen, click Change.
2 Click Software, then select Search.
3 In the Search box, enter iscsi.
4 When the open-iscsi and yast2-iscsi-client modules are displayed, select them.
5 Click Accept.
6 If a dialog box regarding dependencies appears, click Continue and proceed with installation.
To install the iSCSI initiator after SLES 10 SP1 installation:
1 Select Desktop→YaST→Software→Software Management.
2 Select Search.
3 In the Search box, enter iscsi.
4 When the open-iscsi and yast2-iscsi-client modules are displayed, select them.
5 Click Accept.
Installing MD Storage Software  
The MD3000i Storage Software provides the host-based storage agent, multipath driver, and MD  
Storage Manager application used to operate and manage the storage array solution. The MD Storage  
Manager application is installed on a host server to configure, manage, and monitor the storage array.  
When installing from the CD, three installation types are available:  
Typical (Full installation) — This package installs both the management station and host server  
software. It includes the necessary host-based storage agent, multipath driver, and MD Storage  
Manager software. Select this option if you plan to use MD Storage Manager on the host server to  
configure, manage, and monitor the storage array.  
Management Station — This package installs the MD Storage Manager software, which is needed to configure, manage, and monitor the storage array. Select this option if you plan to use MD Storage Manager to manage the storage array from a standalone system that is connected to the storage array only via the Ethernet management ports.
Host — This package installs the necessary storage agent and multipath driver on a host server  
connected to the storage array. Select this option on all host servers that are connected to a storage  
array but will NOT use MD Storage Manager for any storage array management tasks.  
NOTE: Dell recommends using the Host installation type if the host server is running Windows Server 2008  
Core version.  
Installing MD Storage Software on an iSCSI-attached Host Server (Windows)  
To install MD Storage Manager on a Windows system, you must have administrative privileges to install  
MD Storage Manager files and program packages to the C:\Program Files\Dell\MD Storage Manager  
directory.  
NOTE: A minimum version of the Storport driver must be installed on the host server before installing the MD  
Storage Manager software. A hotfix with the minimum supported version of the Storport driver is located in the  
\windows\Windows_2003_2008\hotfixes directory on the MD3000i Resource CD. The MD Storage Manager  
installation will test for the minimum Storport version and will require you to install it before proceeding.  
Complete the following steps to install MD Storage Manager on an iSCSI-connected host server:  
1 Close all other programs before installing any new software.
2 Insert the CD, if necessary, and navigate to the main menu.
NOTE: If the host server is running Windows Server 2008 Core version, navigate to the CD drive and run the  
setup.bat utility.  
3 From the main menu, select Install MD3000i Storage Software.
The Installation Wizard appears.  
4 Click Next.
5 Accept the terms of the License Agreement, and click Next.
The screen shows the default installation path.  
6 Click Next to accept the path, or enter a new path and click Next.
7 Select an installation type:
Typical (Full installation) — This package installs both the management station and host software.  
It includes the necessary host-based storage agent, multipath driver, and MD Storage Manager  
software. Select this option if you plan to use MD Storage Manager on the host server to configure,  
manage, and monitor the storage array.  
OR  
Host — This package installs the necessary storage agent and multipath driver on a host server  
connected to the storage array. Select this option on all hosts that are connected to a storage array  
but will NOT use MD Storage Manager for any storage array management tasks.  
NOTE: Dell recommends using the Host installation type if the host server is running Windows Server 2008  
Core version.  
8 Click Next.
9 If the Overwrite Warning dialog appears, click OK. The software currently being installed automatically replaces any existing versions of MD Storage Manager.
10 If you selected Typical (Full installation) in step 7, a screen appears asking whether to restart the event monitor automatically or manually after rebooting. You should configure only one system (either a host or a management station) to automatically restart the event monitor.
NOTE: The event monitor notifies the administrator of problem conditions with the storage array. MD Storage  
Manager can be installed on more than one system, but running the event monitor on multiple systems can  
cause multiple alert notifications to be sent for the same error condition. To avoid this issue, enable the event  
monitor only on a single system that monitors your storage arrays. For more information on alerts, the event  
monitor, and manually restarting the event monitor, see the User’s Guide.  
11 The Pre-Installation Summary screen appears, showing the installation destination, the required disk space, and the available disk space. If the installation path is correct, click Install.
12 When the installation completes, click Done.  
13 A screen appears asking if you want to restart the system now. Select No, I will restart my  
system myself.  
14 If you are setting up a cluster host, double-click the MD3000i Stand Alone to Cluster.reg file located  
in the windows\utility directory of the MD3000i Resource CD. This merges the file into the registry of  
each node.  
NOTE: Windows clustering is only supported on Windows Server 2003 and Windows Server 2008.  
If you are reconfiguring a cluster node into a stand-alone host, double-click the MD3000i Cluster to Stand Alone.reg file located in the windows\utility directory of the MD3000i Resource CD. This merges the file into the host registry.
NOTE: These registry files set the host up for the correct failback operation.  
15 If you have third-party applications that use the Microsoft Volume Shadow-copy Service (VSS) or Virtual  
Disk Service (VDS) Application Programming Interface (API), install the VDS_VSS package located in  
the windows\VDS_VSS directory on the MD3000i Resource CD. Separate versions for 32-bit and 64-bit  
operating systems are provided. The VSS and VDS provider will engage only if it is needed.  
16 Set the path for the command line interface (CLI), if required. See the MD Storage Manager CLI Guide  
for more information.  
17 Install MD Storage Manager on all other Windows hosts attached to the MD3000i array.  
18 If you have not yet cabled your MD3000i Storage Array, do so at this time.  
19 After the MD3000i has initialized, reboot each host attached to the array.  
NOTE: If you are not installing MD Storage Manager directly from the Resource CD (for example, if you are instead  
installing MD Storage Manager from a shared network drive), you must manually apply iSCSI updates to the  
Windows system registry. To apply these updates, go to the \windows\Windows_2003_2008\iSCSI_reg_changer  
directory on the Resource CD and run the iSCSi_reg_changer_Win2k3.bat or iSCSi_reg_changer_Win2k8.bat file.  
The iSCSI Initiator must be installed before you make these updates.  
Installing MD Storage Software on an iSCSI-attached Host Server (Linux)  
MD Storage Manager can be installed and used only on Linux distributions that utilize the RPM Package Manager format, such as Red Hat® or SUSE®. The installation packages are installed by default in the /opt/dell/mdstoragemanager directory.
NOTE: Root privileges are required to install the software.  
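Both requirements can be checked from a shell before launching the installer. This is a minimal sketch; the `preflight` function name and the echoed labels are illustrative, not part of the Dell installer:

```shell
# Pre-flight check before running install.sh: the installer requires
# root privileges and an RPM-based distribution (Red Hat, SUSE).
preflight() {
  # Effective user ID 0 means root
  if [ "$(id -u)" -eq 0 ]; then echo "root: yes"; else echo "root: no"; fi
  # The rpm binary indicates an RPM Package Manager based distribution
  if command -v rpm > /dev/null 2>&1; then echo "rpm: yes"; else echo "rpm: no"; fi
}
preflight
```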
Follow these steps to install MD Storage Manager software on an iSCSI-connected host server:  
1 Close all other programs before installing any new software.
2 Insert the CD. For some Linux installations, when you insert a CD into a drive, a screen appears asking if you want to run the CD. Select Yes if the screen appears. Otherwise, double-click on the autorun script in the top directory or, from within a terminal window, run ./install.sh from the linux directory on the CD.
NOTE: On RHEL 5 operating systems, CDs are automounted with the -noexec mount option, which does not allow you to run executables directly from the CD. To complete this step, you must unmount the CD and then manually remount it; you can then run the executables. The command to unmount the CD is:
umount CD_device_node
The command to manually mount the CD is:
mount CD_device_node mount_directory
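As a concrete example of the remount sequence, assuming the CD was automounted at /media/cdrom from the device node /dev/cdrom (both paths are assumptions; substitute the values for your system):

```shell
# Unmount the CD that was automounted with -noexec (requires root)
umount /dev/cdrom

# Remount it without the noexec restriction
mount /dev/cdrom /media/cdrom

# The installer can now be started from the remounted CD
cd /media/cdrom/linux && ./install.sh
```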
3 At the CD main menu, type 2 and press <Enter>.
The installation wizard appears.
4 Click Next.
5 Accept the terms of the License Agreement and click Next.
6 Select an installation type:
Typical (Full installation) — This package installs both the management station and host software. It includes the necessary host-based storage agent, multipath driver, and MD Storage Manager software. Select this option if you plan to use MD Storage Manager on the host server to configure, manage, and monitor the storage array.
OR  
Host — This package installs the necessary storage agent and multipath driver on a host server  
connected to the storage array. Select this option on all hosts that are connected to a storage array  
but will NOT use MD Storage Manager for any storage array management tasks.  
7 Click Next.
8 If the Overwrite Warning dialog appears, click OK. The software currently being installed automatically replaces any existing versions of MD Storage Manager.
9 The Multipath Warning dialog box may appear to advise that this installation requires an RDAC MPP driver. If this screen appears, click OK. Installation instructions for the RDAC MPP driver are given in step 13.
10 If you selected Typical (full) installation in step 6, a screen appears asking whether to restart the event  
monitor automatically or manually after rebooting. You should configure only one system (either a host  
or a management station) to automatically restart the event monitor.  
NOTE: The event monitor notifies the administrator of problem conditions with the storage array. MD Storage  
Manager can be installed on more than one system, but running the event monitor on multiple systems can  
cause multiple alert notifications to be sent for the same error condition. To avoid this issue, enable the event  
monitor only on a single system which monitors your MD3000i arrays. For more information on alerts, the  
event monitor, and manually restarting the event monitor, see the User’s Guide.  
11 The Pre-Installation Summary screen appears, showing the installation destination, the required disk space, and the available disk space. If the installation path is correct, click Install.
12 When the installation completes, click Done.  
13 At the install the multi-pathing driver [y/n]? prompt, answer y (yes).
14 When the RDAC driver installation is complete, quit the menu and restart the system.  
15 Install MD Storage Manager on all other hosts attached to the MD3000i array.  
16 Reboot each host attached to the array.  
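After the reboot, you can confirm from a shell that the multipath driver loaded. The module name filter below is an assumption (mpp matches typical RDAC MPP module names such as mppUpper and mppVhba, which can vary by driver version, so check your driver's documentation):

```shell
# List loaded kernel modules and filter for the RDAC MPP driver.
# Module names are version-dependent; "mpp" is an assumed pattern.
lsmod | grep -i mpp
```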
Installing a Dedicated Management Station (Windows and Linux)  
Optionally, you can manage your storage array over the network from a dedicated system attached to the array through the Ethernet management port. If you choose this option, follow these steps to install MD Storage Manager on that dedicated system.
1 (Windows) From the CD main menu, select Install MD3000i Storage Software.
2 (Linux) From the CD main menu, type 2 and press <Enter>.
The Installation Wizard appears.
3 Click Next.
4 Accept the terms of the License Agreement and click Next.
5 Click Next to accept the default installation path (Windows), or enter a new path and click Next.
6 Select Management Station as the installation type. This option installs only the MD Storage Manager software used to configure, manage, and monitor an MD3000i storage array.
7 Click Next.
8 If the Overwrite Warning dialog appears, click OK. The software currently being installed automatically replaces any existing versions of MD Storage Manager.
9 A screen appears asking whether to restart the event monitor automatically or manually after rebooting. You should configure only one system (either a host or a management station) to automatically restart the event monitor.
NOTE: The event monitor notifies the administrator of problem conditions with the storage array. MD Storage  
Manager can be installed on more than one system, but running the event monitor on multiple systems can  
cause multiple alert notifications to be sent for the same error condition. To avoid this issue, enable the event  
monitor only on a single system that monitors your MD3000i arrays. For more information on alerts, the event  
monitor, and manually restarting the event monitor, see the MD Storage Manager User’s Guide.  
10 The Pre-Installation Summary screen appears, showing the installation destination, the required disk space, and the available disk space. If the installation path is correct, click Install.
11 When the installation completes, click Done.  
A screen appears asking if you want to restart the system now.  
12 Restart the system.  
13 Set the path for the command line interface (CLI), if required. See the MD Storage Manager CLI  
Guide for more information.  
Documentation for Windows Systems  
Viewing Resource CD Contents  
1 Insert the CD. If autorun is disabled, navigate to the CD and double-click setup.exe.
NOTE: On a server running Windows Server 2008 Core version, navigate to the CD and run the setup.bat  
utility. Only the MD3000i Readme can be viewed on Windows Server 2008 Core versions. Other MD3000i  
documentation cannot be viewed or installed.  
A screen appears showing the following items:
a View MD3000i Readme
b Install MD3000i Storage Software
c Install MD3000i Documentation
d iSCSI Setup Instructions
2 To view the readme.txt file, click the first bar.
The readme.txt file appears in a separate window.
3 Close the window after viewing the file to return to the menu screen.
4 To view the manuals from the CD, open the HTML versions from the /docs/ folder on the CD.
Installing the Manuals  
1 Insert the CD, if necessary, and select Install MD3000i Documentation in the main menu.
A second screen appears.
2 Click Next.
3 Accept the License Agreement and click Next.
4 Select the installation location or accept the default and click Next.
5 Click Install.
The installation process begins.
6 When the process completes, click Finish to return to the main menu.
7 To view the installed documents, go to My Computer and navigate to the installation location.
NOTE: The MD3000i Documentation cannot be installed on Windows Server 2008 Core versions.  
Documentation for Linux Systems  
Viewing Resource CD Contents  
1 Insert the CD.
For some Linux distributions, a screen appears asking if you want to run the CD. Select Yes if the  
screen appears. If no screen appears, execute ./install.sh within the linux folder on the CD.  
2 A menu screen appears showing the following items:
1 View MD3000i Readme  
2 Install MD3000i Storage Software  
3 Install Multi-pathing Driver  
4 Install MD3000i Documentation  
5 View MD3000i Documentation  
6 iSCSI Setup Instructions  
7 Dell Support  
8 View End User License Agreement  
3 If you want to view the readme.txt file, type 1 and press <Enter>.
The file appears in a separate window. Close the window after viewing the file to return to the menu screen.
4 To view another document, type 5 and press <Enter>.
A second menu screen appears with the following selections:
MD3000i Owner's Manual  
MD3000i Installation Guide  
MD Storage Manager CLI Guide  
MD Storage Manager User's Guide  
NOTE: To view the documents from the CD, you must have a web browser installed on the system.  
5 Type the number of the document you want and press <Enter>.
The document opens in a browser window.
6 Close the document when you are finished. The system returns to the documentation menu described in step 4.
7 Select another document or type q and press <Enter> to quit. The system returns to the main menu screen.
Installing the Manuals  
1 Insert the CD, if necessary, and from the menu screen, type 5 and press <Enter>.
2 A screen appears showing the default location for installation. Press <Enter> to accept the path shown, or enter a different path and press <Enter>.
3 When installation is complete, press any key to return to the main menu.
4 To view the installed documents, open a browser window and navigate to the installation directory.
To use the storage array, you must configure iSCSI on both the host server(s) and the storage array.  
Step-by-step instructions for configuring iSCSI are described in this section. However, before  
proceeding here, you must have already installed the Microsoft iSCSI initiator and the MD Storage  
Manager software. If you have not, refer to Software Installation and complete those procedures  
before attempting to configure iSCSI.  
NOTE: Although some of the steps shown in this section can be performed in MD Storage Manager from a management station, the iSCSI initiator must be installed and configured on each host server.
Before You Start  
Before you begin configuring iSCSI, you should fill out the iSCSI Configuration Worksheet  
(Table 4-2 and Table 4-3). Gathering this type of information about your network prior to starting  
the configuration steps should help you complete the process in less time.  
NOTE: If you are running Windows Server 2008 and elect to use IPv6, use Table 4-3 to define your  
settings on the host server and storage array controller iSCSI ports. IPv6 is not supported on storage  
array controller management ports.  
Terminology  
The table below outlines the terminology used in the iSCSI configuration steps later in this section.  
Table 4-1. Standard Terminology Used in iSCSI Configuration  
CHAP (Challenge Handshake Authentication Protocol) — An optional security protocol used to control access to an iSCSI storage system by restricting use of the iSCSI data ports on both the host server and storage array. For more information on the types of CHAP authentication supported, see Understanding CHAP Authentication.
host or host server — A server connected to the storage array via iSCSI ports.
host server port — An iSCSI port on the host server used to connect it to the storage array.
iSCSI initiator — The iSCSI-specific software installed on the host server that controls communications between the host server and the storage array.
iSCSI host port — The iSCSI port (two per controller) on the storage array.
iSNS (Microsoft Internet Storage Naming Service) — An automated discovery, management, and configuration tool used by some iSCSI devices.
management station — The system from which you manage your host server/storage array configuration.
storage array — The enclosure containing the storage data accessed by the host server.
target — An iSCSI port on the storage array that accepts and responds to requests from the iSCSI initiator installed on the host server.
iSCSI Configuration Worksheet  
The iSCSI Configuration Worksheet (Table 4-2 or Table 4-3) helps you plan your configuration.  
Recording host server and storage array IP addresses at a single location will help you configure your  
setup faster and more efficiently.  
Guidelines for Configuring Your Network for iSCSI provides general network setup guidelines for both  
Windows and Linux environments. It is recommended that you review these guidelines before  
completing the worksheet.  
Table 4-2. iSCSI Configuration Worksheet (IPv4 settings)
The worksheet diagrams a host server (A) connected to the MD3000i (B), with space to record the Target CHAP Secret and Mutual CHAP Secret. The storage array default addresses are: controller 0 — 192.168.130.101 (In 0 default), 192.168.131.101 (In 1 default), 192.168.128.101 (Mgmt network port); controller 1 — 192.168.130.102 (In 0 default), 192.168.131.102 (In 1 default), 192.168.128.102 (Mgmt network port).
If you need additional space for more than one host server, use an additional sheet.
Record the following values:
A — Static IP address (host server), subnet (should be different for each NIC), and default gateway for iSCSI port 1, iSCSI port 2, iSCSI port 3, iSCSI port 4, and the management port.
B — Static IP address (storage array), subnet, and default gateway for iSCSI controller 0, In 0; iSCSI controller 0, In 1; management port, cntrl. 0; iSCSI controller 1, In 0; iSCSI controller 1, In 1; and management port, cntrl. 1.
Table 4-3. iSCSI Configuration Worksheet (IPv6 settings)
The worksheet diagrams a host server (A) connected to the MD3000i (B), with space to record the Target CHAP Secret and Mutual CHAP Secret for controller 0 and controller 1.
If you need additional space for more than one host server, use an additional sheet.
Record the following values:
A — For host iSCSI port 1 and host iSCSI port 2: the link local IP address, routable IP address, subnet prefix, and gateway.
B — For each of iSCSI controller 0, In 0; iSCSI controller 0, In 1; iSCSI controller 1, In 0; and iSCSI controller 1, In 1: the IP address (link local, of the form FE80 : 0000 : 0000 : 0000 : ____ : ____ : ____ : ____), routable IP address 1, routable IP address 2, and router IP address.
Configuring iSCSI on Your Storage Array  
The following sections contain step-by-step instructions for configuring iSCSI on your storage array. However, before beginning, it is important to understand where each of these steps occurs in relation to your host server/storage array environment.
Table 4-4 below shows each specific iSCSI configuration step and where it occurs.  
Table 4-4. Host Server vs. Storage Array
These steps are performed on the STORAGE ARRAY using MD Storage Manager:
Step 1: Discover the storage array
Step 2: Configure the iSCSI ports on the storage array
Step 4: Configure host access
Step 5: (Optional) Configure CHAP authentication on the storage array
Step 8: (Optional) Set up in-band management
These steps are performed on the HOST SERVER using the Microsoft or Linux iSCSI Initiator:
Step 3: Perform target discovery from the iSCSI initiator
Step 6: (Optional) Configure CHAP authentication on the host server
Step 7: Connect to the storage array from the host server
Using iSNS  
iSNS (Internet Storage Naming Service) Server, supported only on Windows iSCSI environments,  
eliminates the need to manually configure each individual storage array with a specific list of initiators  
and target IP addresses. Instead, iSNS automatically discovers, manages, and configures all iSCSI devices  
in your environment.  
For more information on iSNS, including installation and configuration, see www.microsoft.com.  
Step 1: Discover the Storage Array (Out-of-band management only)  
Default Management Port Settings  
By default, the storage array management ports are set to DHCP configuration. If a controller on your storage array is unable to get an IP configuration from a DHCP server, it times out after 10 seconds and falls back to a default static IP address. The default IP configuration is:
Controller 0: IP: 192.168.128.101 Subnet Mask: 255.255.255.0  
Controller 1: IP: 192.168.128.102 Subnet Mask: 255.255.255.0  
NOTE: No default gateway is set.  
NOTE: If DHCP is not used, initial configuration of the management station must be performed on the same  
physical subnet as the storage array. Additionally, during initial configuration, at least one network adapter  
must be configured on the same IP subnet as the storage array’s default management port (192.168.128.101 or  
192.168.128.102). After initial configuration (management ports are configured using MD Storage Manager),  
the management station’s IP address can be changed back to its previous settings.  
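For example, on a Linux management station you could add a temporary address on the default management subnet during initial configuration. The interface name eth0 and the host address .200 below are assumptions; substitute values for your network:

```shell
# Temporarily put the management station on the array's default
# management subnet (requires root; eth0 is an assumed interface name)
ip addr add 192.168.128.200/24 dev eth0

# ... perform initial configuration in MD Storage Manager ...

# Remove the temporary address once the management ports are reconfigured
ip addr del 192.168.128.200/24 dev eth0
```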
NOTE: This procedure applies to out-of-band management only. If you choose to set up in-band  
management, you must complete this step and then refer to Step 8: (Optional) Set Up In-Band Management.  
You can discover the storage array automatically or manually. Choose one and complete the steps below.  
Automatic Storage Array Discovery  
1 Launch MD Storage Manager.
If this is the first storage array to be set up, the Add New Storage Array window appears.
2 Choose Automatic and click OK.
It may take several minutes for the discovery process to complete. Closing the discovery status window  
before the discovery process completes will cancel the discovery process.  
After discovery is complete, a confirmation screen appears. Click Close to close the screen.  
Manual Storage Array Discovery  
1 Launch MD Storage Manager.
If this is the first storage array to be set up, the Add New Storage Array window appears.
2 Select Manual and click OK.
3 Select Out-of-band management and enter the host server name(s) or IP address(es) of the iSCSI storage array controller.
4 Click Add.
Out-of-band management should now be successfully configured.  
After discovery is complete, a confirmation screen appears. Click Close to close the screen.  
Set Up the Array  
1 When discovery is complete, the name of the first storage array found appears under the Summary tab in MD Storage Manager.
2 The default name for the newly discovered storage array is Unnamed. If another name appears, click the down arrow next to that name and choose Unnamed in the drop-down list.
3 Click the Initial Setup Tasks option to see links to the remaining post-installation tasks. For more information about each task, see the User’s Guide. Perform these tasks in the order shown in Table 4-5.
NOTE: Before configuring the storage array, check the status icons on the Summary tab to make sure the  
enclosures in the storage array are in an Optimal status. For more information on the status icons,  
see Troubleshooting Tools.  
Table 4-5. Initial Storage Array Setup Tasks

Task: Rename the storage array.
Purpose: To provide a more meaningful name than the software-assigned label of Unnamed.
Information Needed: A unique, clear name with no more than 30 characters that may include letters, numbers, and no special characters other than underscore (_), minus (–), or pound sign (#).
NOTE: If you need to physically find the device, click Blink the storage array on the Initial Setup Tasks dialog box or click the Tools tab and choose Blink. Lights on the front of the storage array blink intermittently to identify the array. Dell recommends blinking storage arrays to ensure that you are working on the correct enclosure.
NOTE: MD Storage Manager does not check for duplicate names. Names are not case sensitive.

Task: Set a storage array password.
Purpose: To restrict unauthorized access. MD Storage Manager asks for a password before changing the configuration or performing a destructive operation.
Information Needed: A case-sensitive password that meets the security requirements of your enterprise.
Table 4-5. Initial Storage Array Setup Tasks (continued)

Task: Set the management port IP addresses on each controller.
Purpose: To set the management port IP addresses to match your public network configuration. Although DHCP is supported, static IP addressing is recommended.
Information Needed: In MD Storage Manager, select Initial Setup Tasks→Configure Ethernet Management Ports, then specify the IP configuration for each management port on the storage array controllers.
NOTE: If you change a management port IP address, you may need to update your management station configuration and/or repeat storage array discovery.

Task: Set up alert notifications (e-mail alerts and SNMP alerts).
Purpose: To arrange to notify individuals (by e-mail) and/or storage management stations (by SNMP) when a storage array component degrades or fails, or an adverse environmental condition occurs.
Information Needed: E-mail — the sender (sender’s SMTP gateway and e-mail address) and recipients (fully qualified e-mail addresses). SNMP — (1) a community name, a known set of storage management stations set by the administrator as an ASCII string in the management console (default: "public"), and (2) a trap destination, the IP address or host name of a management console running an SNMP service.
NOTE: The Status area in the Summary tab shows if alerts have been set for the selected array.
Step 2: Configure the iSCSI Ports on the Storage Array  
By default, the iSCSI ports on the storage array are set to the following IPv4 settings:  
Controller 0, Port 0: IP: 192.168.130.101 Subnet Mask: 255.255.255.0 Port: 3260  
Controller 0, Port 1: IP: 192.168.131.101 Subnet Mask: 255.255.255.0 Port: 3260  
Controller 1, Port 0: IP: 192.168.130.102 Subnet Mask: 255.255.255.0 Port: 3260  
Controller 1, Port 1: IP: 192.168.131.102 Subnet Mask: 255.255.255.0 Port: 3260  
NOTE: No default gateway is set.  
To configure the iSCSI ports on the storage array, complete the following steps:  
1 From MD Storage Manager, click the iSCSI tab, then select Configure iSCSI Host Ports.
2 Configure the iSCSI ports on the storage array.
NOTE: Using static IPv4 addressing is recommended, although DHCP is supported.  
NOTE: IPv4 is enabled by default on the iSCSI ports. You must enable IPv6 to configure IPv6 addresses.  
NOTE: IPv6 is supported only on controllers that will connect to host servers running Windows Server 2008.  
The following settings are available (depending on your specific configuration) by clicking the  
Advanced button:  
Virtual LAN (VLAN) support  
A VLAN is a network of different systems that behave as if they are connected to the same  
segments of a local area network (LAN) and are supported by the same switches and routers.  
When configured as a VLAN, a device can be moved to another location without being  
reconfigured. To use VLAN on your storage array, obtain the VLAN ID from your network  
administrator and enter it here.  
Ethernet priority  
This parameter is set to determine a network access priority.  
TCP listening port  
The port number the controller on the storage array uses to listen for iSCSI logins from host server  
iSCSI initiators.  
NOTE: The TCP listening port for the iSNS server is the port number the storage array controller uses to  
connect to an iSNS server. This allows the iSNS server to register the iSCSI target and portals of the storage  
array so that the host server initiators can identify them.  
Jumbo frames  
Jumbo Ethernet frames are created when the maximum transmission units (MTUs) are larger than  
1500 bytes per frame. This setting is adjustable port-by-port.  
3. To enable ICMP PING responses for all ports, select Enable ICMP PING responses.
4. Click OK when all iSCSI storage array port configurations are complete.
5. Test the connection by performing a ping command on each iSCSI storage array port.
Step 3: Perform Target Discovery from the iSCSI Initiator  
This step identifies the iSCSI ports on the storage array to the host server. Select the set of steps in one  
of the following sections (Windows or Linux) that corresponds to your operating system.  
If you are using Windows Server 2003 or Windows Server 2008 GUI version  
1. Click Start→ Programs→ Microsoft iSCSI Initiator or Start→ All Programs→ Administrative Tools→ iSCSI Initiator.
2. Click the Discovery tab.
3. Under Target Portals, click Add and enter the IP address or DNS name of the iSCSI port on the storage array.
4. If the iSCSI storage array uses a custom TCP port, change the Port number. The default is 3260.
5. Click Advanced and set the following values on the General tab:
Local Adapter: Must be set to Microsoft iSCSI Initiator.  
Source IP: The source IP address of the host you want to connect with.  
Data Digest and Header Digest: Optionally, you can specify that a digest of data or header  
information be compiled during transmission to assist in troubleshooting.  
CHAP logon information: Leave this option unselected and do not enter CHAP information  
at this point, unless you are adding the storage array to a SAN that has target CHAP already  
configured.  
NOTE: IPSec is not supported.  
Click OK to exit the Advanced menu, and OK again to exit the Add Target Portals screen.
6. To exit the Discovery tab, click OK.
If you plan to configure CHAP authentication, do not perform discovery on more than one iSCSI  
port at this point. Stop here and go to the next step, Step 4: Configure Host Access.  
If you do not plan to configure CHAP authentication, repeat step 1 through step 6 (above) for all iSCSI ports on the storage array.
If you are using Windows Server 2008 Core Version  
1. Set the iSCSI initiator service to start automatically:
sc \\<server_name> config msiscsi start= auto
2. Start the iSCSI service:
sc start msiscsi
3. Add a target portal:
iscsicli QAddTargetPortal <IP_address_of_iSCSI_port_on_storage_array>
If you are using Linux Server
Configuration of the iSCSI initiator for Red Hat® Enterprise Linux® version 4 and SUSE® Linux Enterprise Server 9 distributions is performed by modifying the /etc/iscsi.conf file, which is installed by default when you install MD Storage Manager. You can edit the file directly, or replace the default file with a sample file included on the MD3000i Resource CD.
To use the sample file included on the CD:  
1. Save the default /etc/iscsi.conf file by renaming it to another name of your choice.
2. Copy the appropriate sample file from /linux/etc on the CD to /etc/iscsi.conf.
3. Rename the sample file to iscsi.conf.
4. Edit the iscsi.conf file and replace the IP address entries shown for DiscoveryAddress= with the IP addresses assigned to the iSCSI ports on your storage array:
For example, if your MD3000i has two iSCSI controllers (four iSCSI ports), you will need to add four  
IP addresses:  
DiscoveryAddress=<your_storage_array_IP_address>  
DiscoveryAddress=<your_storage_array_IP_address>  
DiscoveryAddress=<your_storage_array_IP_address>  
DiscoveryAddress=<your_storage_array_IP_address>  
If you elect not to use the sample file on the CD, edit the existing default /etc/iscsi.conf file as shown  
in the previous example.  
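Because one DiscoveryAddress= line is needed per array port, generating the entries from a list of IPs avoids typos. The helper below is a sketch of our own, not a Dell tool; the IPs shown are the array's factory defaults from Step 2, so substitute the addresses you actually assigned.

```shell
# Sketch: emit one DiscoveryAddress= entry per iSCSI port IP, suitable for
# appending to /etc/iscsi.conf.
gen_discovery_entries() {  # usage: gen_discovery_entries <ip> [<ip> ...]
  for ip in "$@"; do
    printf 'DiscoveryAddress=%s\n' "$ip"
  done
}

gen_discovery_entries 192.168.130.101 192.168.131.101 \
                      192.168.130.102 192.168.131.102
```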
5. Edit (or add) the following entries to the /etc/iscsi.conf file:
HeaderDigest=never  
DataDigest=never  
LoginTimeout=15  
IdleTimeout=15  
PingTimeout=5  
ConnFailTimeout=144  
AbortTimeout=10  
ResetTimeout=30  
Continuous=no  
InitialR2T=no  
ImmediateData=yes  
MaxRecvDataSegmentLength=65536  
FirstBurstLength=262144  
MaxBurstLength=16776192  
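Before restarting the daemon, it is worth confirming that every required key made it into the file. The checker below is an illustrative sketch (our own helper, not part of the manual's procedure); it takes the file path as an argument so it can be run against a copy first.

```shell
# Sketch: verify that the entries listed above are present in a candidate
# iscsi.conf before restarting the iSCSI daemon.
check_iscsi_conf() {  # usage: check_iscsi_conf <path_to_conf>
  for key in HeaderDigest DataDigest LoginTimeout IdleTimeout PingTimeout \
             ConnFailTimeout AbortTimeout ResetTimeout; do
    grep -q "^${key}=" "$1" || { echo "missing: $key"; return 1; }
  done
  echo ok
}
```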
6. Restart the iSCSI daemon by executing the following command from the console:
/etc/init.d/iscsi restart
7. Verify that the server can connect to the storage array by executing this command from a console:
iscsi -ls
If successful, an iSCSI session has been established to each iSCSI port on the storage array. Sample  
output from the command should look similar to this:  
********************************************************************  
SFNet iSCSI Driver Version ...4:0.1.11-3(02-May-2006)  
********************************************************************  
TARGET NAME         : iqn.1984-05.com.dell:powervault.6001372000f5f0e600000000463b9292
TARGET ALIAS        :
HOST ID             : 2
BUS ID              : 0
TARGET ID           : 0
TARGET ADDRESS      : 192.168.0.110:3260,1
SESSION STATUS      : ESTABLISHED AT Wed May 9 18:20:27 CDT 2007
SESSION ID          : ISID 00023d000001 TSIH 5
********************************************************************
TARGET NAME         : iqn.1984-05.com.dell:powervault.6001372000f5f0e600000000463b9292
TARGET ALIAS        :
HOST ID             : 3
BUS ID              : 0
TARGET ID           : 0
TARGET ADDRESS      : 192.168.0.111:3260,1
SESSION STATUS      : ESTABLISHED AT Wed May 9 18:20:28 CDT 2007
SESSION ID          : ISID 00023d000002 TSIH 4
If you are using RHEL 5 or SLES 10 SP1  
Configuration of the iSCSI initiator for RHEL version 5 and SLES 10 SP1 distributions is done by  
modifying the /etc/iscsi/iscsid.conf file, which is installed by default when you install MD Storage  
Manager. You can edit the file directly, or replace the default file with a sample file included on the  
MD3000i Resource CD.  
To use the sample file included on the CD:  
1. Save the default /etc/iscsi/iscsid.conf file by renaming it to another name of your choice.
2. Copy the appropriate sample file from /linux/etc on the CD to /etc/iscsi/iscsid.conf.
3. Rename the sample file to iscsid.conf.
4. Edit the following entries in the /etc/iscsi/iscsid.conf file:
a. Edit (or verify) that the node.startup = manual line is disabled.
b. Edit (or verify) that the node.startup = automatic line is enabled. This will enable automatic startup of the service at boot time.
c. Verify that the following time-out value is set to 144:
node.session.timeo.replacement_timeout = 144
d. Save and close the /etc/iscsi/iscsid.conf file.
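A quick check that the 144-second replacement timeout from step 4c actually landed in the file can save a failed restart later. This is a sketch of our own (not a Dell command); pass it the path to the iscsid.conf you edited.

```shell
# Sketch: confirm the replacement timeout is set to 144 in a given iscsid.conf.
check_replacement_timeout() {  # usage: check_replacement_timeout <path_to_conf>
  grep -q '^node.session.timeo.replacement_timeout = 144$' "$1" \
    && echo ok || echo 'replacement_timeout is not set to 144'
}
```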
5. From the console, restart the iSCSI service with the following command:
service iscsi start
6. Verify that the iSCSI service is running during boot using the following command from the console:
chkconfig iscsi on
7. To display the available iSCSI targets at the specified IP address, use the following command:
iscsiadm -m discovery -t st -p <IP_address_of_iSCSI_port>
8. After target discovery, use the following command to manually log in:
iscsiadm -m node -l
This logon will be performed automatically at startup if automatic startup is enabled.  
9. Manually log out of the session using the following command:
iscsiadm -m node -T <initiator_username> -p <target_ip> -u
Step 4: Configure Host Access  
This step specifies which host servers will access virtual disks on the storage array. You should perform  
this step:  
before mapping virtual disks to host servers  
any time you connect new host servers to the storage array  
1. Launch MD Storage Manager.
2. Click on the Configure tab, then select Configure Host Access (Manual).
3. At Enter host name, enter the host server to be available to the storage array for virtual disk mapping. This can be an informal name, not necessarily a name used to identify the host server to the network.
4. In the Select host type drop-down menu, select the host type, and click Next.
5. If your iSCSI initiator shows up in the list of Known iSCSI initiators, make sure it is highlighted and click Add and then Next. Otherwise, click New and enter the iSCSI initiator name.
In Windows, the iSCSI initiator name can be found on the General tab of the iSCSI Initiator Properties window.
In Linux, the iSCSI initiator name can be found in the /etc/initiatorname.iscsi file or by using the iscsi-iname command.
Click Next.
6. Choose whether or not the host server will be part of a host server group that will share access to the same virtual disks as other host servers. Select Yes only if the host is part of a Microsoft cluster. Click Next.
7. Click Finish.
Understanding CHAP Authentication  
Before proceeding to either Step 5: Configure CHAP Authentication on the Storage Array (optional) or  
Step 6: Configure CHAP Authentication on the Host Server (optional), it would be useful to gain an  
overview of how CHAP authentication works.  
What is CHAP?  
Challenge Handshake Authentication Protocol (CHAP) is an optional iSCSI authentication method  
where the storage array (target) authenticates iSCSI initiators on the host server. Two types of CHAP are  
supported: target CHAP and mutual CHAP.  
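The mechanics behind both CHAP types can be sketched in a few lines. The helper below illustrates the exchange defined in RFC 1994 and is not a Dell command: the side being authenticated returns a one-way hash of an identifier, the shared secret, and a random challenge, so the secret itself never crosses the wire. Real iSCSI CHAP hashes binary values; plain strings are used here only to show the shape of the exchange.

```shell
# Sketch of the CHAP handshake math: response = MD5(id || secret || challenge).
chap_response() {  # usage: chap_response <id> <secret> <challenge>
  printf '%s%s%s' "$1" "$2" "$3" | md5sum | awk '{print $1}'
}

challenge="c0ffee1234"                                 # sent by the authenticator
answer=$(chap_response 1 "shared-secret" "$challenge") # computed by the peer
check=$(chap_response 1 "shared-secret" "$challenge")  # recomputed by the authenticator
[ "$answer" = "$check" ] && echo "authenticated"       # prints: authenticated
```

In mutual CHAP, each side runs this exchange in both directions, which is why the initiator and target secrets must be configured on both the host server and the storage array.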
Target CHAP  
In target CHAP, the storage array authenticates all requests for access issued by the iSCSI initiator(s) on  
the host server via a CHAP secret. To set up target CHAP authentication, you enter a CHAP secret on  
the storage array, then configure each iSCSI initiator on the host server to send that secret each time it  
attempts to access the storage array.  
Mutual CHAP  
In addition to setting up target CHAP, you can set up mutual CHAP, in which both the storage array and the iSCSI initiator authenticate each other. To set up mutual CHAP, you configure the iSCSI initiator with a CHAP secret that the storage array must send to the host server in order to establish a connection. In this two-way authentication process, both the host server and the storage array send information that the other must validate before a connection is allowed.
CHAP is an optional feature and is not required to use iSCSI. However, if you do not configure CHAP  
authentication, any host server connected to the same IP network as the storage array can read from and  
write to the storage array.  
NOTE: If you elect to use CHAP authentication, you should configure it on both the storage array (using MD  
Storage Manager) and the host server (using the iSCSI initiator) before preparing virtual disks to receive data.  
If you prepare disks to receive data before you configure CHAP authentication, you will lose visibility to the  
disks once CHAP is configured.  
CHAP Definitions  
To summarize the differences between target CHAP and mutual CHAP authentication, see Table 4-6.  
Table 4-6. CHAP Types Defined
CHAP Type — Description
Target CHAP — Sets up accounts that iSCSI initiators use to connect to the target storage array. The target storage array then authenticates the iSCSI initiator.
Mutual CHAP — Sets up an account that a target storage array uses to connect to an iSCSI initiator. The iSCSI initiator then authenticates the target.
How CHAP Is Set Up  
The next two steps in your iSCSI configuration, Step 5: Configure CHAP Authentication on the Storage  
Array (optional) and Step 6: Configure CHAP Authentication on the Host Server (optional), offer step-by-  
step procedures for setting up CHAP on your storage array and host server.  
Step 5: Configure CHAP Authentication on the Storage Array (optional)
If you are configuring CHAP authentication of any kind (either target-only or target and mutual), you
must complete this step and Step 6: Configure CHAP Authentication on the Host Server (optional).
If you are not configuring any type of CHAP, skip these steps and go to Step 7: Connect to the Target  
Storage Array from the Host Server.  
NOTE: If you choose to configure mutual CHAP authentication, you must first configure target CHAP.  
Remember, in terms of iSCSI configuration, the term target always refers to the storage array.  
Configuring Target CHAP Authentication on the Storage Array  
1. From MD Storage Manager, click the iSCSI tab, then Change Target Authentication. Make a selection based on the following:
Table 4-7. CHAP Settings
Selection — Description
None — This is the default selection. If None is the only selection, the storage array will allow an iSCSI initiator to log on without supplying any type of CHAP authentication.
None and CHAP — The storage array will allow an iSCSI initiator to log on with or without CHAP authentication.
CHAP — If CHAP is selected and None is deselected, the storage array will require CHAP authentication before allowing access.
2. To configure a CHAP secret, select CHAP and select CHAP Secret.
3. Enter the Target CHAP secret (or Generate Random Secret), confirm it in Confirm Target CHAP Secret, and click OK.
Although the storage array allows sizes from 12 to 57 characters, many initiators only support CHAP secret sizes up to 16 characters (128-bit).
NOTE: Once entered, a CHAP secret is not retrievable. Make sure you record the secret in an accessible place. If Generate Random Secret is used, copy and paste the secret into a text file for future reference, since the same CHAP secret will be used to authenticate any new host servers you may add to the storage array. If you forget this CHAP secret, you must disconnect all existing hosts attached to the storage array and repeat the steps in this chapter to re-add them.
4. Click OK.
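The two length limits above (the array's 12-57 character range and the common 16-character initiator ceiling) can be pre-checked before you commit a secret. This helper is an illustrative sketch, not a Dell utility:

```shell
# Sketch: validate a candidate CHAP secret against the array's 12-57
# character range and flag secrets longer than the common 16-character
# initiator limit.
check_chap_secret() {  # usage: check_chap_secret <secret>
  len=${#1}
  if [ "$len" -lt 12 ] || [ "$len" -gt 57 ]; then
    echo "rejected: length $len is outside 12-57"
  elif [ "$len" -gt 16 ]; then
    echo "array accepts it, but some initiators limit secrets to 16 characters"
  else
    echo "ok"
  fi
}

check_chap_secret "my-16-char-secr"   # prints: ok
```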
Configuring Mutual CHAP Authentication on the Storage Array  
The initiator secret must be unique for each host server that connects to the storage array and must not  
be the same as the target CHAP secret.  
1. From MD Storage Manager, click on the iSCSI tab, then select Enter Mutual Authentication Permissions.
2. Select an initiator on the host server and click CHAP Secret.
3. Enter the Initiator CHAP secret, confirm it in Confirm initiator CHAP secret, and click OK.
NOTE: In some cases, an initiator CHAP secret may already be defined in your configuration. If so, use it here.
4. Click Close.
NOTE: To remove a CHAP secret, you must delete the host initiator and re-add it.  
Step 6: Configure CHAP Authentication on the Host Server (optional)
If you configured CHAP authentication in Step 5: Configure CHAP Authentication on the Storage Array  
(optional), complete the following steps. If not, skip to Step 7: Connect to the Target Storage Array from  
the Host Server.  
Select the set of steps in one of the following sections (Windows or Linux) that corresponds to your  
operating system.  
If you are using Windows Server 2003 or Windows Server 2008 GUI version  
1. Click Start→ Programs→ Microsoft iSCSI Initiator or Start→ All Programs→ Administrative Tools→ iSCSI Initiator.
2. If you are NOT using mutual CHAP authentication, skip to step 4 below.
3. If you are using mutual CHAP authentication:
Click the General tab.
Select Secret.
At Enter a secure secret, enter the mutual CHAP secret you entered for the storage array.
4. Click the Discovery tab.
5. Under Target Portals, select the IP address of the iSCSI port on the storage array and click Remove.
The iSCSI port you configured on the storage array during target discovery should disappear. You will reset this IP address under CHAP authentication in the steps that immediately follow.
6. Under Target Portals, click Add and re-enter the IP address or DNS name of the iSCSI port on the storage array (removed above).
7. Click Advanced and set the following values on the General tab:
Local Adapter: Should always be set to Microsoft iSCSI Initiator.  
Source IP: The source IP address of the host you want to connect with.  
Data Digest and Header Digest: Optionally, you can specify that a digest of data or header  
information be compiled during transmission to assist in troubleshooting.  
CHAP logon information: Enter the target CHAP authentication username and secret you  
entered (for the host server) on the storage array.  
Perform mutual authentication: If mutual CHAP authentication is configured, select this  
option.  
NOTE: IPSec is not supported.  
8. Click OK.
If discovery session failover is desired, repeat step 5 and step 6 (in this step) for all iSCSI ports on the storage array. Otherwise, single-host port configuration is sufficient.
NOTE: If the connection fails, make sure that all IP addresses are entered correctly. Mistyped IP addresses  
are a common cause of connection problems.  
If you are using Windows Server 2008 Core Version  
1. Set the iSCSI initiator services to start automatically (if not already set):
sc \\<server_name> config msiscsi start= auto
2. Start the iSCSI service (if necessary):
sc start msiscsi
3. If you are not using mutual CHAP authentication, skip to step 5.
4. Enter the mutual CHAP secret you entered for the storage array:
iscsicli CHAPSecret <secret>
5. Remove the target portal that you configured on the storage array during target discovery:
iscsicli RemoveTargetPortal <IP_address> <TCP_listening_port>
You will reset this IP address under CHAP authentication in the following steps.
6. Add the target portal with CHAP defined:
iscsicli QAddTargetPortal <IP_address_of_iSCSI_port_on_storage_array> [CHAP_username] [CHAP_password]
where
[CHAP_username] is the initiator name
[CHAP_password] is the target CHAP secret
If discovery session failover is desired, repeat step 6 for all iSCSI ports on the storage array. Otherwise, single-host port configuration is sufficient.
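Since the CHAP username and secret are optional trailing arguments to QAddTargetPortal, a small wrapper can keep the invocation consistent across ports. The helper below is a sketch of our own, not a Dell tool; it only composes the command line (you would still run the resulting iscsicli command on the Server Core host).

```shell
# Sketch: compose the QAddTargetPortal command line for a port, with or
# without the optional CHAP username and secret.
build_qadd() {  # usage: build_qadd <ip> [<chap_username> <chap_secret>]
  if [ $# -eq 3 ]; then
    printf 'iscsicli QAddTargetPortal %s %s %s\n' "$1" "$2" "$3"
  else
    printf 'iscsicli QAddTargetPortal %s\n' "$1"
  fi
}

build_qadd 192.168.130.101 iqn.example:initiator my-chap-secret
```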
If you are using Linux Server  
1. Edit the /etc/iscsi.conf file to add an OutgoingUsername= and OutgoingPassword= entry after each DiscoveryAddress= entry. OutgoingUsername is the iSCSI initiator name entered in Step 4: Configure Host Access, and the OutgoingPassword is the CHAP secret created in Step 5: Configure CHAP Authentication on the Storage Array (optional).
For example, your edited /etc/iscsi.conf file might look like this:  
DiscoveryAddress=172.168.10.6  
OutgoingUsername=iqn.1987-05.com.cisco:01.742b2d31b3e  
OutgoingPassword=0123456789abcdef  
DiscoveryAddress=172.168.10.7  
OutgoingUsername=iqn.1987-05.com.cisco:01.742b2d31b3e  
OutgoingPassword=0123456789abcdef  
DiscoveryAddress=172.168.10.8  
OutgoingUsername=iqn.1987-05.com.cisco:01.742b2d31b3e  
OutgoingPassword=0123456789abcdef  
DiscoveryAddress=172.168.10.9  
OutgoingUsername=iqn.1987-05.com.cisco:01.742b2d31b3e  
OutgoingPassword=0123456789abcdef  
If you are using Mutual CHAP authentication on Linux Server  
If you are configuring Mutual CHAP authentication in Linux, you must also add an IncomingUsername= and IncomingPassword= entry after each OutgoingPassword= entry. The IncomingUsername is the iSCSI target name, which can be viewed in MD Storage Manager by accessing the iSCSI tab and clicking Change Target Identification.
For example, your edited /etc/iscsi.conf file might look like this:  
DiscoveryAddress=172.168.10.6  
OutgoingUsername=iqn.1987-05.com.cisco:01.742b2d31b3e  
OutgoingPassword=0123456789abcdef  
IncomingUsername=iqn.1984-05.com.dell:powervault.6001372000f5f0e600000000463b9292  
IncomingPassword=abcdef0123456789  
DiscoveryAddress=172.168.10.7  
OutgoingUsername=iqn.1987-05.com.cisco:01.742b2d31b3e  
OutgoingPassword=0123456789abcdef  
IncomingUsername=iqn.1984-05.com.dell:powervault.6001372000f5f0e600000000463b9292  
IncomingPassword=abcdef0123456789  
DiscoveryAddress=172.168.10.8  
OutgoingUsername=iqn.1987-05.com.cisco:01.742b2d31b3e  
OutgoingPassword=0123456789abcdef  
IncomingUsername=iqn.1984-05.com.dell:powervault.6001372000f5f0e600000000463b9292  
IncomingPassword=abcdef0123456789  
DiscoveryAddress=172.168.10.9  
OutgoingUsername=iqn.1987-05.com.cisco:01.742b2d31b3e  
OutgoingPassword=0123456789abcdef  
IncomingUsername=iqn.1984-05.com.dell:powervault.6001372000f5f0e600000000463b9292  
IncomingPassword=abcdef0123456789  
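Since the same five-line stanza repeats for every discovery address, generating it avoids transcription errors. This generator is an illustrative sketch, not part of the manual's procedure; it simply prints one stanza per call in the format shown above.

```shell
# Sketch: emit the per-address mutual-CHAP stanza for /etc/iscsi.conf.
gen_mutual_stanza() {  # usage: gen_mutual_stanza <addr> <out_user> <out_pass> <in_user> <in_pass>
  printf 'DiscoveryAddress=%s\nOutgoingUsername=%s\nOutgoingPassword=%s\nIncomingUsername=%s\nIncomingPassword=%s\n' \
    "$1" "$2" "$3" "$4" "$5"
}

gen_mutual_stanza 172.168.10.6 iqn.example:initiator 0123456789abcdef \
  iqn.example:target abcdef0123456789
```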
If you are using RHEL 5 or SLES 10 SP1  
1. To enable CHAP (optional), enable the following line in your /etc/iscsi/iscsid.conf file:
node.session.auth.authmethod = CHAP
2. To set a username and password for CHAP authentication of the initiator by the target(s), edit the following lines as shown:
node.session.auth.username = <iscsi_initiator_username>
node.session.auth.password = <CHAP_initiator_password>
3. If you are using Mutual CHAP authentication, you can set the username and password for CHAP authentication of the target(s) by the initiator by editing the following lines:
node.session.auth.username_in = <iscsi_target_username>
node.session.auth.password_in = <CHAP_target_password>
4. To set up discovery session CHAP authentication, first uncomment the following line:
discovery.sendtargets.auth.authmethod = CHAP
5. Set a username and password for a discovery session CHAP authentication of the initiator by the target(s) by editing the following lines:
discovery.sendtargets.auth.username = <iscsi_initiator_username>
discovery.sendtargets.auth.password = <CHAP_initiator_password>
6. To set the username and password for discovery session CHAP authentication of the target(s) by the initiator for Mutual CHAP, edit the following lines:
discovery.sendtargets.auth.username_in = <iscsi_target_username>
discovery.sendtargets.auth.password_in = <CHAP_target_password>
As a result of steps 1 through 6, the final configuration contained in the /etc/iscsi/iscsid.conf file might look like this:
node.session.auth.authmethod = CHAP
node.session.auth.username = iqn.2005-03.com.redhat01.78b1b8cad821
node.session.auth.password = password_1
node.session.auth.username_in = iqn.1984-05.com.dell:powervault.123456
node.session.auth.password_in = test1234567890
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = iqn.2005-03.com.redhat01.78b1b8cad821
discovery.sendtargets.auth.password = password_1
discovery.sendtargets.auth.username_in = iqn.1984-05.com.dell:powervault.123456
discovery.sendtargets.auth.password_in = test1234567890
If you are using SLES10 SP1 via the GUI  
1. Select Desktop→ YaST→ iSCSI Initiator.
2. Click Service Start, then select When Booting.
3. Select Discovered Targets, then select Discovery.
4. Enter the IP address of the port.
5. Click Next.
6. Select any target that is not logged in and click Log in.
7. Choose one:
If you are not using CHAP authentication, select No Authentication. Proceed to step 8.
or
If you are using CHAP authentication, enter the CHAP username and password. To enable Mutual CHAP, select and enter the Mutual CHAP username and password.
8. Repeat step 7 for each target until at least one connection is logged in for each controller.
9. Go to Connected Targets.
10. Verify that the targets are connected and show a status of true.
Step 7: Connect to the Target Storage Array from the Host Server  
If you are using Windows Server 2003 or Windows Server 2008 GUI  
1. Click Start→ Programs→ Microsoft iSCSI Initiator or Start→ All Programs→ Administrative Tools→ iSCSI Initiator.
2. Click the Targets tab.
If previous target discovery was successful, the iqn of the storage array should be displayed under Targets.
3. Click Log On.
4. Select Automatically restore this connection when the system boots.
5. Select Enable multi-path.
6. Click Advanced and configure the following settings under the General tab:
Local Adapter: Must be set to Microsoft iSCSI Initiator.  
Source IP: The source IP address of the host server you want to connect from.  
Target Portal: Select the iSCSI port on the storage array controller that you want to  
connect to.  
Data Digest and Header Digest: Optionally, you can specify that a digest of data or header  
information be compiled during transmission to assist in troubleshooting.  
CHAP logon information: If CHAP authentication is required, select this option and enter the Target secret.
Perform mutual authentication: If mutual CHAP authentication is configured, select this  
option.  
NOTE: IPSec is not supported.  
7. Click OK.
To support storage array controller failover, the host server must be connected to at least one iSCSI port on each controller. Repeat step 3 through step 7 for each iSCSI port on the storage array that you want to establish as a failover target (the Target Portal address will be different for each port you connect to).
NOTE: To enable the higher throughput of multipathing I/O, the host server must connect to both iSCSI ports on each controller, ideally from separate host-side NICs. Repeat step 3 through step 7 for each iSCSI port on each controller. If using a duplex MD3000i configuration, LUNs should also be balanced between the controllers.
8. The Status field on the Targets tab should now display as Connected. Click OK to close the Microsoft iSCSI initiator.
NOTE: MD3000i supports only round robin load-balancing policies.  
If you are using Windows Server 2008 Core Version  
1. Set the iSCSI initiator services to start automatically (if not already set):
sc \\<server_name> config msiscsi start= auto
2. Start the iSCSI service (if necessary):
sc start msiscsi
3. Log on to the target:
iscsicli PersistentLoginTarget <Target_Name> <Report_To_PNP>  
<Target_Portal_Address> <TCP_Port_Number_Of_Target_Portal> * * *  
<Login_Flags> * * * * * <Username> <Password> <Authtype> *  
<Mapping_Count>  
where  
<Target_Name> is the target name as displayed in the target list. Use the iscsicli ListTargets command to display the target list.
<Report_To_PNP> is T, which exposes the LUN to the operating system as a storage device.
<Target_Portal_Address> is the IP address of the iSCSI port on the controller being logged in to.
<TCP_Port_Number_Of_Target_Portal> is 3260.
<Login_Flags> is 0x2 to enable multipathing for the target on the initiator. This value allows more than one session to be logged in to a target at one time.
<Username> is the initiator name.
<Password> is the target CHAP secret.
<Authtype> is either 0 for no authentication, 1 for Target CHAP, or 2 for Mutual CHAP.
NOTE: <Username>, <Password> and <Authtype> are optional parameters. They can be replaced with an asterisk (*) if CHAP is not used.
<Mapping_Count> is 0, indicating that no mappings are specified and no further parameters are required.
An asterisk (*) represents the default value of a parameter.
For example, your logon command might look like this:
iscsicli PersistentLoginTarget iqn.1984-05.com.dell:powervault.6001372000ffe333000000004672edf2 T 192.168.130.101 3260 * * * 0x2 * * * * * * * * * 0
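Because the positional asterisks are easy to miscount, composing the command from a template keeps the parameter order fixed. The helper below is a sketch of our own (not a Dell tool) for the no-CHAP case: target name, T, portal IP, port 3260, then the 0x2 login flags and trailing mapping count, with every defaulted field as *.

```shell
# Sketch: assemble a PersistentLoginTarget line for a given target and portal,
# multipathing enabled (0x2), no CHAP, no mappings.
build_persistent_login() {  # usage: build_persistent_login <target_iqn> <portal_ip>
  printf 'iscsicli PersistentLoginTarget %s T %s 3260 * * * 0x2 * * * * * * * * * 0\n' \
    "$1" "$2"
}

build_persistent_login iqn.example:target 192.168.130.101
```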
To view active sessions to the target, use the following command:  
iscsicli SessionList  
To support storage array controller failover, the host server must be connected to at least one iSCSI port on each controller. Repeat step 3 for each iSCSI port on the storage array that you want to establish as a failover target. (The Target_Portal_Address will be different for each port you connect to.)
PersistentLoginTarget does not initiate a login to the target until after the system is rebooted. To establish immediate login to the target, substitute LoginTarget for PersistentLoginTarget.
NOTE: Refer to the Microsoft iSCSI Software Initiator 2.x User’s Guide for more information about the  
commands used in the previous steps. For more information about Windows Server 2008 Server Core, refer to  
If you are using a Linux Server  
If you configured CHAP authentication in the previous steps, you must restart iSCSI from the Linux  
command line as shown below. If you did not configure CHAP authentication, you do not need to restart  
iSCSI.  
/etc/init.d/iscsi restart  
Verify that the host server is able to connect to the storage array by running the iscsi -ls command as you  
did in target discovery. If the connection is successful, an iSCSI session will be established to each iSCSI  
port on the storage array.  
Sample output from the command should look similar to this:  
*******************************************************************************  
SFNet iSCSI Driver Version ...4:0.1.11-3(02-May-2006)  
*******************************************************************************  
TARGET NAME         : iqn.1984-05.com.dell:powervault.6001372000f5f0e600000000463b9292
TARGET ALIAS        :
HOST ID             : 2
BUS ID              : 0
TARGET ID           : 0
TARGET ADDRESS      : 192.168.0.110:3260,1
SESSION STATUS      : ESTABLISHED AT Wed May 9 18:20:27 CDT 2007
SESSION ID          : ISID 00023d000001 TSIH 5
*******************************************************************************
TARGET NAME         : iqn.1984-05.com.dell:powervault.6001372000f5f0e600000000463b9292
TARGET ALIAS        :
HOST ID             : 3
BUS ID              : 0
TARGET ID           : 0
TARGET ADDRESS      : 192.168.0.111:3260,1
SESSION STATUS      : ESTABLISHED AT Wed May 9 18:20:28 CDT 2007
SESSION ID          : ISID 00023d000002 TSIH 4
*******************************************************************************  
Viewing the status of your iSCSI connections
In MD Storage Manager, clicking the iSCSI tab and then Configure iSCSI Host Ports will show the status of each iSCSI port you attempted to connect and the configuration state of all IP addresses. If either displays Disconnected or Unconfigured, respectively, check the following and repeat the iSCSI connection procedure:
Are all cables securely attached to each port on the host server and storage array?  
Is TCP/IP correctly configured on all target host ports?  
Is CHAP set up correctly on both the host server and the storage array?  
To review optimal network setup and configuration settings, see Guidelines for Configuring Your Network  
for iSCSI.  
Step 8: (Optional) Set Up In-Band Management  
Out-of-band management (see Step 1: Discover the Storage Array (Out-of-band management only)) is the  
recommended method for managing the storage array. However, to optionally set up in-band  
management, use the steps shown below.  
The default iSCSI host port IPv4 addresses are shown below for reference:  
Controller 0, Port 0: IP: 192.168.130.101  
Controller 0, Port 1: IP: 192.168.131.101  
Controller 1, Port 0: IP: 192.168.130.102  
Controller 1, Port 1: IP: 192.168.131.102  
NOTE: The management station you are using must be configured for network communication to the same IP  
subnet as the MD3000i host ports.  
NOTE: By default, the MD3000i host ports are not IPv6 enabled. To use IPv6 for in-band management, you  
must first connect either out-of-band, or in-band using the default IPv4 addresses. Once this is done, you can  
enable IPv6 and begin step 1 below using the IPv6 addresses.  
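Per the NOTE above, the management station must share a subnet with the host ports. A minimal sanity-check sketch, assuming a /24 netmask and a hypothetical station address of 192.168.130.50:

```shell
#!/bin/sh
# Check whether the management station is on the same /24 subnet as the
# default controller host-port addresses (the station IP is hypothetical).
same_subnet24() {
    # Compare the first three octets of two dotted-quad IPv4 addresses.
    [ "${1%.*}" = "${2%.*}" ]
}

STATION_IP=192.168.130.50
for port_ip in 192.168.130.101 192.168.130.102; do
    if same_subnet24 "$STATION_IP" "$port_ip"; then
        echo "$port_ip: same subnet as $STATION_IP"
    else
        echo "$port_ip: different subnet from $STATION_IP"
    fi
done
```

Note that the ports on the 192.168.131.x subnet would need to be reached through a second, correspondingly addressed interface.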
1 Establish an iSCSI session to the MD3000i RAID storage array.
2 In either Windows or Linux, restart the SMagent service.
3 Launch MD Storage Manager.
If this is the first storage array to be set up for management, the Add New Storage Array window will appear. Otherwise, click New.
4 Select Manual and click OK.
5 Select In-band management and enter the host server name(s) or IP address(es) of the host server that is running the MD Storage Manager software.
6 Click Add.
In-band management should now be successfully configured.  
Premium Features  
If you purchased premium features for your storage array, you can set them up at this point. Click Tools→View/Enable Premium Features, or click View and Enable Premium Features on the Initial Setup Tasks dialog box, to review the available features.
Advanced features supported by MD Storage Manager include:  
Snapshot Virtual Disk  
Virtual Disk Copy  
To install and enable these premium features, you must first purchase a feature key file for each feature  
and then specify the storage array that will host them. The Premium Feature Activation Card that shipped  
in the same box as your storage array gives instructions for this process.  
For more information on using these premium features, see the User’s Guide.  
Troubleshooting Tools  
The MD Storage Manager establishes communication with each managed array and determines the  
current array status. When a problem occurs on a storage array, MD Storage Manager provides several  
ways to troubleshoot the problem:  
Recovery Guru — The Recovery Guru diagnoses critical events on the storage array and recommends step-by-step recovery procedures for problem resolution. To access the Recovery Guru using MD Storage Manager, click Support→Recover from Failure. The Recovery Guru can also be accessed from the Status area of the Summary page.
Storage Array Profile — The Storage Array Profile provides an overview of your storage array configuration, including firmware versions and the current status of all devices on the storage array. To access the Storage Array Profile, click Support→View storage array profile. The profile can also be viewed by clicking the Storage array profile link in the Hardware Components area of the Summary tab.
Status Icons — Status icons identify the six possible health status conditions of the storage array.  
For every non-Optimal status icon, use the Recovery Guru to detect and troubleshoot the problem.  
Optimal — Every component in the managed array is in the desired working condition.  
Needs Attention — A problem exists with the managed array that requires intervention to  
correct it.  
Fixing — A Needs Attention condition has been corrected and the managed array is currently  
changing to an Optimal status.  
Unresponsive — The storage management station cannot communicate with the array, one  
controller, or both controllers in the storage array. Wait at least five minutes for the storage array to  
return to an Optimal status following a recovery procedure.  
Contacting Device — MD Storage Manager is establishing contact with the array.  
Needs Upgrade — The storage array is running a level of firmware that is no longer supported by  
MD Storage Manager.  
Support Information Bundle — The Gather Support Information link on the Support tab saves all  
storage array data, such as profile and event log information, to a file that you can send if you seek  
technical assistance for problem resolution. It is helpful to generate this file before you contact Dell  
support with MD3000i-related issues.  
Uninstalling Software  
The following sections contain information on how to uninstall MD Storage Manager software from  
both host and management station systems.  
Uninstalling From Windows  
Use the Change/Remove Program feature to uninstall MD Storage Manager from Microsoft® Windows® operating systems other than Windows Server 2008:
1 From the Control Panel, double-click Add or Remove Programs.
2 Select MD Storage Manager from the list of programs.
3 Click Change/Remove, and follow the prompts to complete the uninstallation process.
The Uninstall Complete window appears.
4 Select Yes to restart the system, and then click Done.
Use the following procedure to uninstall MD Storage Manager on Windows Server® 2008 GUI versions:
1 From the Control Panel, double-click Programs and Features.
2 Select MD Storage Manager from the list of programs.
3 Click Uninstall/Change, then follow the prompts to complete the uninstallation process.
The Uninstall Complete window appears.
4 Select Yes to restart the system, then click Done.
Use the following procedure to uninstall MD Storage Manager on Windows Server 2008 Core  
versions:  
1 Navigate to the \Program Files\Dell\MD Storage Manager\Uninstall Dell_MD_Storage_Manager directory.
NOTE: By default, MD Storage Manager is installed in the \Program Files\Dell\MD Storage Manager directory. If another directory was used during installation, navigate to that directory before beginning the uninstall procedure.
2 From the installation directory, type (the command is case sensitive):
Uninstall Dell_MD_Storage_Manager
and press <Enter>.
3 From the Uninstall window, click Next and follow the on-screen instructions.
4 Select Yes to restart the system, then click Done.
Uninstalling From Linux  
Use the following procedure to uninstall MD Storage Manager from a Linux system.  
1 By default, MD Storage Manager is installed in the /opt/dell/mdstoragemanager directory. If another directory was used during installation, navigate to that directory before beginning the uninstall procedure.
2 From the installation directory, type
./uninstall_dell_mdstoragemanager
and press <Enter>.
3 From the Uninstall window, click Next, and follow the instructions that appear on the screen.
While the software is uninstalling, the Uninstall window is displayed. When the uninstall procedure is complete, the Uninstall Complete window is displayed.
4 Click Done.
Guidelines for Configuring Your Network  
for iSCSI  
This section gives general guidelines for setting up your network environment and IP addresses for  
use with the iSCSI ports on your host server and storage array. Your specific network environment  
may require different or additional steps than shown here, so make sure you consult with your system  
administrator before performing this setup.  
Windows Host Setup  
If you are using a Windows host network, the following section provides a framework for preparing  
your network for iSCSI.  
To set up a Windows host network, you must configure the IP address and netmask of each iSCSI  
port connected to the storage array. The specific steps depend on whether you are using a Dynamic  
Host Configuration Protocol (DHCP) server, static IP addressing, Domain Name System (DNS)  
server, or Windows Internet Name Service (WINS) server.  
NOTE: The server IP addresses must be configured for network communication to the same IP subnet  
as the storage array management and iSCSI ports.  
If using a DHCP server  
1 On the Control Panel, select Network connections or Network and Sharing Center. Then click Manage network connections.
2 Right-click the network connection you want to configure and select Properties.
3 On the General tab (for a local area connection) or the Networking tab (for all other connections), select Internet Protocol (TCP/IP), and then click Properties.
4 Select Obtain an IP address automatically, and then click OK.
If using Static IP addressing  
1 On the Control Panel, select Network connections or Network and Sharing Center. Then click Manage network connections.
2 Right-click the network connection you want to configure and select Properties.
3 On the General tab (for a local area connection) or the Networking tab (for all other connections), select Internet Protocol (TCP/IP), and then click Properties.
4 Select Use the following IP address and enter the IP address, subnet mask, and default gateway addresses.
If using a DNS server  
1 On the Control Panel, select Network connections or Network and Sharing Center. Then click Manage network connections.
2 Right-click the network connection you want to configure and select Properties.
3 On the General tab (for a local area connection) or the Networking tab (for all other connections), select Internet Protocol (TCP/IP), and then click Properties.
4 Select Obtain DNS server address automatically, or enter the preferred and alternate DNS server IP addresses, and click OK.
If using a WINS server  
NOTE: If you are using a DHCP server to allocate WINS server IP addresses, you do not need to add WINS  
server addresses.  
1 On the Control Panel, select Network connections.
2 Right-click the network connection you want to configure and select Properties.
3 On the General tab (for a local area connection) or the Networking tab (for all other connections), select Internet Protocol (TCP/IP), and then click Properties.
4 Select Advanced, then the WINS tab, and click Add.
5 In the TCP/IP WINS server window, type the IP address of the WINS server and click Add.
6 To enable use of the Lmhosts file to resolve remote NetBIOS names, select Enable LMHOSTS lookup.
7 To specify the location of the file that you want to import into the Lmhosts file, select Import LMHOSTS, and then select the file in the Open dialog box.
8 Enable or disable NetBIOS over TCP/IP.
If using Windows 2008 Core Version  
On a server running Windows 2008 Core version, use the netsh interface command to configure the  
iSCSI ports on the host server.  
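A hypothetical example of such a netsh command is shown below; the interface name "iSCSI1" and the addresses are placeholders, so substitute the names and values from your own configuration:

```
netsh interface ipv4 set address name="iSCSI1" source=static address=192.168.130.10 mask=255.255.255.0
```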
Linux Host Setup  
If you are using a Linux host network, the following section provides a framework for preparing your  
network for iSCSI.  
To set up a Linux host network, you must configure the IP address and netmask of each iSCSI port  
connected to the storage array. The specific steps depend on whether you are configuring TCP/IP using  
Dynamic Host Configuration Protocol (DHCP) or configuring TCP/IP using a static IP address.  
NOTE: The server IP addresses must be configured for network communication to the same IP subnet as the  
storage array management and iSCSI ports.  
Configuring TCP/IP on Linux using DHCP (root users only)  
1 Edit the /etc/sysconfig/network file as follows:
NETWORKING=yes
HOSTNAME=mymachine.mycompany.com
2 Edit the configuration file for the connection you want to configure, either /etc/sysconfig/network-scripts/ifcfg-ethX (for RHEL) or /etc/sysconfig/network/ifcfg-eth-id-XX:XX:XX:XX:XX (for SUSE):
BOOTPROTO=dhcp
Also, verify that an IP address and netmask are not defined.
3 Restart network services using the following command:
/etc/init.d/network restart
Configuring TCP/IP on Linux using a Static IP address (root users only)  
1 Edit the /etc/sysconfig/network file as follows, where GATEWAY is the IP address of your default gateway:
NETWORKING=yes
HOSTNAME=mymachine.mycompany.com
GATEWAY=192.168.1.1
2 Edit the configuration file for the connection you want to configure, either /etc/sysconfig/network-scripts/ifcfg-ethX (for RHEL) or /etc/sysconfig/network/ifcfg-eth-id-XX:XX:XX:XX:XX (for SUSE):
BOOTPROTO=static
BROADCAST=192.168.1.255
IPADDR=192.168.1.100
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
TYPE=Ethernet
HWADDR=XX:XX:XX:XX:XX:XX
GATEWAY=192.168.1.1
3 Restart network services using the following command:
/etc/init.d/network restart
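The BROADCAST and NETWORK values must agree with IPADDR and NETMASK. As a sanity check before restarting network services, you can derive them in the shell; this sketch uses the example addresses from step 2:

```shell
#!/bin/sh
# Derive NETWORK and BROADCAST from IPADDR and NETMASK so the values
# written to the ifcfg file can be cross-checked before restarting.
IPADDR=192.168.1.100
NETMASK=255.255.255.0

IFS=. read -r i1 i2 i3 i4 <<EOF
$IPADDR
EOF
IFS=. read -r m1 m2 m3 m4 <<EOF
$NETMASK
EOF

# NETWORK = IPADDR AND NETMASK; BROADCAST = IPADDR OR (inverted NETMASK)
NETWORK="$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
BROADCAST="$((i1 | (255 - m1))).$((i2 | (255 - m2))).$((i3 | (255 - m3))).$((i4 | (255 - m4)))"

echo "NETWORK=$NETWORK"
echo "BROADCAST=$BROADCAST"
```

For the example addresses this prints NETWORK=192.168.1.0 and BROADCAST=192.168.1.255, matching the ifcfg file above.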
Index

A
alerts, 38

C
cabling, 9-10
nonredundancy, 10
CHAP, 45
Linux host and server, 41
target, 45
cluster host
setting up, 24

D
DHCP, 63
discovery, 36
disk group, 8
DNS, 64

E
enclosure connections, 9
event monitor, 24-26
expansion, 15

I
iSCSI, 31
iSCSI configuration worksheet, 33-34
iSCSI initiator, 19
installing on Windows, 20
iSNS, 35

L
Linux, 64

P
password, 37
post-installation configuration, 37
premium features, 9
Premium Features, 59

R
RAID, 8
RDAC MPP driver, 26
readme, 28-29

S
status, 37, 60
status icons, 37, 59
storage array, 8
planning, 9
Storage Array Profile, 59

T
target discovery, 40
troubleshooting, 59

V
VSS, 25

W
Windows, 19, 28, 61