
Dell Storage Center  
SCv2000 and SCv2020 Storage System  
Deployment Guide  
Contents  
About this Guide.......................................................................................................7  
A Adding or Removing an Expansion Enclosure..............................................89  
Adding Multiple Expansion Enclosures to a Storage System Deployed without Expansion Enclosures.......................89
B Troubleshooting Storage Center.................................................................. 107  
Preface  
About this Guide  
This guide describes the features and technical specifications of an SCv2000/SCv2020 storage system.  
Revision History  
Document Number: 8X7FK  
Revision  Date        Description
A00       April 2015  Initial release
Audience  
The information provided in this Deployment Guide is intended for storage or network administrators and  
deployment personnel.  
Contacting Dell  
Dell provides several online and telephone-based support and service options. Availability varies by  
country and product, and some services may not be available in your area.  
To contact Dell for sales, technical support, or customer service issues go to www.dell.com/support.  
For customized support, enter your system Service Tag on the support page and click Submit.  
For general support, browse the product list on the support page and select your product.  
Related Publications  
The following documentation is available for the SCv2000/SCv2020 storage system.  
Dell Storage Center SCv2000 and SCv2020 Storage System Getting Started Guide  
Provides information about an SCv2000/SCv2020 storage system, such as installation instructions  
and technical specifications.  
Dell Storage Center SCv2000 and SCv2020 Storage System Owner’s Manual  
Provides information about an SCv2000/SCv2020 storage system, such as hardware features,  
replacing customer replaceable components, and technical specifications.  
Dell Storage Center SCv2000 Series Virtual Media Update Instructions  
Describes how to install Storage Center software on an SCv2000/SCv2020 storage system using  
virtual media. Installing Storage Center software using the Storage Center Virtual Media option is  
intended for use only by sites that cannot update Storage Center using standard methods.  
Dell Storage Center Release Notes  
Contains information about new features and known and resolved issues for the Storage Center  
software.  
Dell Storage Client Administrator’s Guide  
Provides information about the Dell Storage Client and how it can be used to manage a Storage  
Center.  
Dell Storage Center Software Update Guide  
Describes how to upgrade Storage Center software from an earlier version to the current version.  
Dell Storage Center Command Utility Reference Guide  
Provides instructions for using the Storage Center Command Utility. The Command Utility provides a  
command-line interface (CLI) to enable management of Storage Center functionality on Windows,  
Linux, Solaris, and AIX platforms.  
Dell Storage Center Command Set for Windows PowerShell  
Provides instructions for getting started with Windows PowerShell cmdlets and scripting objects that  
interact with the Storage Center via the PowerShell interactive shell, scripts, and PowerShell hosting  
applications. Help for individual cmdlets is available online.  
Dell TechCenter
Provides technical white papers, best practice guides, and frequently asked questions about Dell Storage products.
1 About the SCv2000/SCv2020 Storage System
The SCv2000/SCv2020 storage system provides the central processing capabilities for the Storage  
Center Operating System (OS) and management of RAID storage.  
The SCv2000/SCv2020 storage system holds the physical disks that provide storage for the Storage  
Center. If additional storage is needed, the SCv2000/SCv2020 also supports SC100/SC120 expansion  
enclosures.  
Storage Center Hardware Components  
The Storage Center described in this document consists of an SCv2000 or SCv2020 storage system,  
enterprise-class switches, and SC100/SC120 expansion enclosures.  
The SCv2000/SCv2020 storage system supports multiple SC100/SC120 expansion enclosures to allow  
for storage expansion.  
NOTE: The cabling between the storage system, switches, and host servers is referred to as front-end connectivity. The cabling between the storage system and expansion enclosures is referred to as back-end connectivity.
SCv2000/SCv2020 Storage System  
The SCv2000 and SCv2020 are 2U storage systems with built-in storage. The SCv2000 supports up to 12 internal 3.5-inch hot-swappable SAS hard drives installed in a four-column, three-row configuration. The SCv2020 supports up to 24 internal 2.5-inch hot-swappable SAS hard drives installed vertically side-by-side.
The SCv2000/SCv2020 storage system contains two redundant power supply/cooling fan modules and  
up to two storage controllers with multiple IO ports that provide communication with servers and  
expansion enclosures.  
Switches  
Dell offers enterprise-class switches as part of the total Storage Center solution.  
The SCv2000/SCv2020 supports Fibre Channel (FC) and Ethernet switches, which provide robust  
connectivity to servers and allow for the use of redundant transport paths. Fibre Channel (FC) or Ethernet  
switches can provide connectivity to a remote Storage Center to allow for replication of data. In addition,  
Ethernet switches provide connectivity to a management network to allow configuration, administration,  
and management of the Storage Center.  
Expansion Enclosures  
An SCv2000/SCv2020 storage system supports multiple SC100/SC120 expansion enclosures. The  
expansion enclosures allow the data storage capabilities of the SCv2000/SCv2020 to be expanded  
beyond the 12 or 24 internal drives in the storage system chassis.  
The SCv2000/SCv2020 storage system supports a total of 168 drives per Storage Center system. This  
total includes the drives in the SCv2000/SCv2020 storage system chassis and the drives in SC100/SC120  
expansion enclosures chassis.  
The SCv2000 supports up to thirteen SC100 expansion enclosures, up to six SC120 expansion  
enclosures, or any combination of SC100/SC120 expansion enclosures as long as the total drive  
count of the system does not exceed 168.  
The SCv2020 supports up to twelve SC100 expansion enclosures, up to six SC120 expansion  
enclosures, or any combination of SC100/SC120 expansion enclosures as long as the total drive  
count of the system does not exceed 168.  
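For example, a fully expanded SCv2000 with thirteen SC100 expansion enclosures holds 12 + (13 × 12) = 168 drives, and a fully expanded SCv2020 with six SC120 expansion enclosures holds 24 + (6 × 24) = 168 drives.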
Storage Center Architecture Options  
A Storage Center with an SCv2000/SCv2020 storage system can be deployed in two configurations:  
An SCv2000/SCv2020 storage system without SC100/SC120 expansion enclosures.  
Figure 1. SCv2000/SCv2020 without Expansion Enclosures  
An SCv2000/SCv2020 storage system with one or more SC100/SC120 expansion enclosures.  
NOTE: A storage system with a single storage controller cannot be deployed with expansion  
enclosures. The storage system must have dual storage controllers to deploy expansion  
enclosures.  
Figure 2. SCv2000/SCv2020 with Two Expansion Enclosures  
Storage Center Replication  
Storage Center sites can be collocated or remotely connected and data can be replicated between sites.  
Storage Center replication can duplicate volume data to another site in support of a disaster recovery plan or to provide local access to a remote data volume. Typically, data is replicated remotely as part of an overall disaster avoidance or recovery plan.
The SCv2000 series Storage Center supports replication to other SCv2000 series Storage Centers.  
However, an Enterprise Manager Data Collector must be used to replicate data between the storage  
systems.  
For more information on installing an Enterprise Manager Data Collector, see the Dell Enterprise  
Manager Installation Guide.  
For more information on managing the Data Collector and setting up replications, see the Dell  
Enterprise Manager Administrator’s Guide.  
Storage Center Communication  
A Storage Center uses multiple types of communication for both data transfer and administrative  
functions.  
Storage Center communication is classified into three types: front end, back end, and system  
administration.  
Front-End Connectivity  
Front-end connectivity provides IO paths from servers to a storage system and replication paths from  
one Storage Center to another Storage Center. The SCv2000/SCv2020 provides three types of front-end  
connectivity:  
Fibre Channel: Hosts, servers, or Network Attached Storage (NAS) appliances access storage by  
connecting to the storage system Fibre Channel ports through one or more Fibre Channel switches.  
Connecting host servers directly to the storage system without using Fibre Channel switches is not  
supported.  
When replication is licensed, the SCv2000/SCv2020 can use the Fibre Channel ports to replicate data  
to another Storage Center.  
iSCSI: Hosts, servers, or Network Attached Storage (NAS) appliances access storage by connecting to the storage system iSCSI ports through one or more Ethernet switches. Connecting host servers directly to the storage system without using Ethernet switches is not supported. (A host-side connection sketch follows this list.)
When replication is licensed, the SCv2000/SCv2020 can use the iSCSI ports to replicate data to another Storage Center.
NOTE: If replication is licensed, the SCv2000/SCv2020 can use the embedded REPL port to  
perform iSCSI replication to another SCv2000 series Storage Center.  
If replication is licensed and the Flex Port license is installed, the SCv2000/SCv2020 can use the  
embedded MGMT port to perform iSCSI replication to another SCv2000 series Storage Center.  
In addition, the SCv2000/SCv2020 can use the embedded MGMT and REPL ports as front-end  
iSCSI ports for connectivity to host servers when the Flex Port license is installed.  
SAS: Hosts or servers access storage by connecting directly to the storage system SAS ports.  
NOTE: The front-end connectivity ports are located on the back of the storage system, but are  
designated as front-end ports.  
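The following sketch illustrates the iSCSI option from the host side on a Windows Server system, using the built-in Microsoft iSCSI initiator cmdlets. The portal addresses are placeholders for this example only; use the iSCSI fault domain addresses assigned during Storage Center configuration.

# Sketch only: connect a Windows Server host to the storage system iSCSI
# front-end ports through the Ethernet switches. The portal addresses below
# are placeholders, not values from this guide.

# Start the Microsoft iSCSI initiator service and set it to start automatically.
Start-Service -Name MSiSCSI
Set-Service -Name MSiSCSI -StartupType Automatic

# Register one target portal per iSCSI fault domain (placeholder addresses).
New-IscsiTargetPortal -TargetPortalAddress "192.168.10.20"
New-IscsiTargetPortal -TargetPortalAddress "192.168.20.20"

# Discover the Storage Center targets and connect to each one with multipath
# enabled, persisting the sessions across reboots.
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true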
SCv2000/SCv2020 Storage System with Fibre Channel Front-End Connectivity  
An SCv2000/SCv2020 storage system with Fibre Channel front-end connectivity may communicate with  
the following components of a Storage Center system.  
Figure 3. Storage System with Fibre Channel Front-End Connectivity  
Item. Description: Speed, Communication Type
1. Server with Fibre Channel Host Bus Adapters (HBAs): 8 Gbps or 16 Gbps, Front End
2. Fibre Channel switch: 8 Gbps or 16 Gbps, Front End
3. SCv2000/SCv2020 storage system with FC front-end connectivity: 8 Gbps or 16 Gbps, Front End
4. SC100/SC120 expansion enclosures: 6 Gbps per channel, Back End
5. Remote Storage Center connected via iSCSI for replication: 1 Gbps, Front End
6. Ethernet switch: 1 Gbps, Front End
7. Management network (computer connected to the storage system through the Ethernet switch): Up to 1 Gbps, System Administration
SCv2000/SCv2020 Storage System with iSCSI Front-End Connectivity  
An SCv2000/SCv2020 storage system with iSCSI front-end connectivity may communicate with the  
following components of a Storage Center system.  
Figure 4. Storage System with iSCSI Front-End Connectivity  
Item. Description: Speed, Communication Type
1. Server with Ethernet (iSCSI) ports or iSCSI Host Bus Adapters (HBAs): 1 Gbps or 10 Gbps, Front End
2. Ethernet switch: 1 Gbps or 10 Gbps, Front End
3. SCv2000/SCv2020 storage system with iSCSI front-end connectivity: 1 Gbps or 10 Gbps, Front End
4. SC100/SC120 expansion enclosures: 6 Gbps per channel, Back End
5. Remote Storage Center connected via iSCSI for replication: 1 Gbps or 10 Gbps, Front End
6. Management network (computer connected to the storage system through the Ethernet switch): Up to 1 Gbps, System Administration
SCv2000/SCv2020 Storage System with SAS Front-End Connectivity  
An SCv2000/SCv2020 storage system with SAS front-end connectivity may communicate with the  
following components of a Storage Center system.  
Figure 5. Storage System with SAS Front-End Connectivity  
Item. Description: Speed, Communication Type
1. Server with SAS Host Bus Adapters (HBAs): 12 Gbps per channel, Front End
2. Ethernet switch: 1 Gbps, Front End
3. SCv2000/SCv2020 storage system with SAS front-end connectivity: 12 Gbps per channel, Front End
4. SC100/SC120 expansion enclosures: 6 Gbps per channel, Back End
5. Remote Storage Center connected via iSCSI for replication: 1 Gbps, Front End
6. Management network (computer connected to the storage system through the Ethernet switch): Up to 1 Gbps, System Administration
Back-End Connectivity  
Back-end connectivity is strictly between the storage system and expansion enclosures, which hold the  
physical drives that provide back-end expansion storage.  
The SCv2000/SCv2020 supports SAS connectivity to multiple SC100/SC120 expansion enclosures.  
System Administration  
To perform system administration, the Storage Center communicates with computers using the Ethernet  
management (MGMT) port.  
The Ethernet management port is used for Storage Center configuration, administration, and  
management.  
NOTE: The baseboard management controller (BMC) does not have a separate physical port on the  
SCv2000/SCv2020. The BMC is accessed through the same Ethernet management port that is used  
for Storage Center configuration, administration, and management.  
SCv2000/SCv2020 Storage System Hardware  
The SCv2000/SCv2020 storage system ships with Dell Enterprise drives, two power supply/cooling fan  
modules, and either one storage controller or two redundant storage controllers.  
Each storage controller contains the front-end, back-end, and management communication ports of the  
storage system.  
SCv2000/SCv2020 Storage System Front Panel Features and Indicators  
The front panel of the SCv2000/SCv2020 contains power and status indicators, a system identification  
button, and a seven-segment display.  
In addition, the hard drives are installed and removed through the front of the storage system chassis.  
Figure 6. SCv2000 Front Panel View  
1. Power indicator: Lights when the storage system power is on.
   - Off: No power
   - On steady green: At least one power supply is providing power to the storage system
2. Status indicator: Lights when at least one power supply is supplying power to the storage system.
   - Off: No power
   - On steady blue: Power is on and firmware is running
   - Blinking blue: Storage system is busy booting or updating
   - On steady amber: Hardware detected fault
   - Blinking amber: Software detected fault
3. Identification button: Lights when the storage system identification is enabled.
   - Off: Normal status
   - Blinking blue: Storage system identification enabled
4. Unit ID display: Displays the storage system identification number (01).
5. Hard drives: Up to 12 3.5-inch or 24 2.5-inch SAS hard drives
SCv2000/SCv2020 Back-Panel Features and Indicators  
The back panel of the SCv2000/SCv2020 shows the storage controller indicators and power supply  
indicators.  
Figure 7. SCv2000/SCv2020 Back Panel View  
1. Power supply/cooling fan module (PSU) (2): Contains a 580 W power supply and fans that provide cooling for the storage system.
2. Battery backup unit (BBU) (2): Allows the storage controller to shut down gracefully when a loss of AC power is detected.
3. Storage controller (1 or 2): Each storage controller module contains:
   - Back-end ports: Two 6 Gbps SAS ports
   - Front-end ports: Fibre Channel ports, iSCSI ports, or SAS ports
   - MGMT port: Embedded Ethernet/iSCSI port, which is typically used for system management
     NOTE: The MGMT port can share iSCSI traffic if the Flex Port license is installed.
   - REPL port: Embedded iSCSI port, which is typically used for replication to another Storage Center
4. Cooling fan fault indicator (2):
   - Off: Normal operation
   - Steady amber: Fan fault or there is a problem communicating with the PSU
   - Blinking amber: PSU is in programming mode
5. AC power fault indicator (2):
   - Off: Normal operation
   - Steady amber: A PSU has been removed or there is a problem communicating with the PSU
   - Blinking amber: PSU is in programming mode
6. AC power status indicator (2):
   - Off: AC power is off, or the power is on but the module is not in a controller, or it may indicate a hardware fault
   - Steady green: AC power is on
   - Blinking green: AC power is on and the PSU is in standby mode
7. DC power fault indicator (2):
   - Off: Normal operation
   - Steady amber: A PSU has been removed, there is a DC or other hardware fault, or there is a problem communicating with the PSU
   - Blinking amber: PSU is in programming mode
8. Power socket (2): Accepts a standard computer power cord.
9. Power switch (2): Controls power for the storage system. There is one switch for each power supply/cooling fan module.
SCv2000/SCv2020 Storage Controller Features and Indicators  
The SCv2000/SCv2020 includes up to two storage controllers in two interface slots.  
The storage controllers support Fibre Channel, iSCSI, or SAS front-end ports.  
SCv2000/SCv2020 Storage Controller with Fibre Channel Front-End Ports  
Features and indicators on a storage controller with Fibre Channel front-end ports.  
Figure 8. Storage Controller with Four 8 Gb Fibre Channel Front-End Ports  
Figure 9. Storage Controller with Two 16 Gb Fibre Channel Front-End Ports  
1. Battery status indicator:
   - Blinking green (on 0.5 sec. / off 1.5 sec.): Battery heartbeat
   - Fast blinking green (on 0.5 sec. / off 0.5 sec.): Battery is charging
   - Steady green: Battery is ready
2. Battery fault indicator:
   - Off: No faults
   - Blinking amber: Correctable fault detected
   - Steady amber: Uncorrectable fault detected; replace battery
3. MGMT port: 10 Mbps, 100 Mbps, or 1 Gbps Ethernet/iSCSI port used for storage system management and access to the BMC
   NOTE: To use the MGMT port as an iSCSI port for replication to another Storage Center, a Flex Port license and replication license are required. To use the MGMT port as a front-end connection to host servers, a Flex Port license is required.
4. REPL port: 10 Mbps, 100 Mbps, or 1 Gbps Ethernet/iSCSI port used for replication to another Storage Center (requires a replication license)
   NOTE: To use the REPL port as a front-end connection to host servers, a Flex Port license is required.
5. SAS activity indicators:
   - Off: Port is off
   - Steady green: Port is on, but without activity
   - Blinking green: Port is on and there is activity
6. Storage controller module status:
   - On: Storage controller completed POST
7. Recessed power off button: Powers down the storage controller if held for more than five seconds
8. Storage controller module fault:
   - Off: No faults
   - Steady amber: Firmware has detected an error
   - Blinking amber: Storage controller is performing POST
9. Recessed reset button: Reboots the storage controller, forcing it to restart at the POST process
10. Identification LED:
   - Off: Identification disabled
   - Blinking blue (for 15 sec.): Identification is enabled
   - Blinking blue (continuously): Storage controller shut down to the Advanced Configuration and Power Interface (ACPI) S5 state
11. USB port: Not for customer use on SCv2000/SCv2020 storage systems.
12. Diagnostic LEDs (8):
   - Green LEDs 0–3: Low byte hex POST code
   - Green LEDs 4–7: High byte hex POST code
13. Serial port (3.5 mm mini jack): Not for customer use.
14. Four or two Fibre Channel ports with three LEDs per port:
   - All off: No power
   - All on: Booting up
   - Blinking amber: 2 Gbps activity
   - Blinking green: 4 Gbps activity
   - Blinking yellow: 8 Gbps activity
   - Blinking amber and yellow: Beacon
   - All blinking (simultaneous): Firmware initialized
   - All blinking (alternating): Firmware fault
15. Mini-SAS port B: Back-end expansion port B
16. Mini-SAS port A: Back-end expansion port A
SCv2000/SCv2020 Storage Controller with iSCSI Front-End Ports  
Features and indicators on a storage controller with iSCSI front-end ports.  
Figure 10. SCv2000/SCv2020 Storage Controller with Four 1 GbE iSCSI Front-End Ports  
Figure 11. SCv2000/SCv2020 Storage Controller with Two 10 GbE iSCSI Front-End Ports  
1. Battery status indicator:
   - Blinking green (on 0.5 sec. / off 1.5 sec.): Battery heartbeat
   - Fast blinking green (on 0.5 sec. / off 0.5 sec.): Battery is charging
   - Steady green: Battery is ready
2. Battery fault indicator:
   - Off: No faults
   - Blinking amber: Correctable fault detected
   - Steady amber: Uncorrectable fault detected; replace battery
3. MGMT port: 10 Mbps, 100 Mbps, or 1 Gbps Ethernet port used for storage system management and access to the BMC
   NOTE: To use the MGMT port as an iSCSI port for replication to another Storage Center, a Flex Port license and replication license are required. To use the MGMT port as a front-end connection to host servers, a Flex Port license is required.
4. REPL port: 10 Mbps, 100 Mbps, or 1 Gbps Ethernet/iSCSI port used for replication to another Storage Center
   NOTE: To use the REPL port as a front-end connection to host servers, a Flex Port license is required.
5. SAS activity indicators:
   - Off: Port is off
   - Steady green: Port is on, but without activity
   - Blinking green: Port is on and there is activity
6. Storage controller module status:
   - On: Storage controller completed POST
7. Recessed power off button: Powers down the storage controller if held for more than five seconds
8. Storage controller module fault:
   - Off: No faults
   - Steady amber: Firmware has detected an error
   - Blinking amber: Storage controller is performing POST
9. Recessed reset button: Reboots the storage controller, forcing it to restart at the POST process
10. Identification LED:
   - Off: Identification disabled
   - Blinking blue (for 15 sec.): Identification is enabled
   - Blinking blue (continuously): Storage controller shut down to the Advanced Configuration and Power Interface (ACPI) S5 state
11. USB port: Not for customer use on SCv2000/SCv2020 storage systems.
12. Diagnostic LEDs (8):
   - Green LEDs 0–3: Low byte hex POST code
   - Green LEDs 4–7: High byte hex POST code
13. Serial port (3.5 mm mini jack): Not for customer use.
14. Four or two iSCSI ports with two LEDs per port:
   - Off: No power
   - Steady amber: Link
   - Blinking green: Activity
15. Mini-SAS port B: Back-end expansion port B
16. Mini-SAS port A: Back-end expansion port A
SCv2000/SCv2020 Storage Controller with SAS Front-End Ports  
Features and indicators on a storage controller with SAS front-end ports.  
Figure 12. SCv2000/SCv2020 Storage Controller with Four 12 Gb SAS Front-End Ports  
1. Battery status indicator:
   - Blinking green (on 0.5 sec. / off 1.5 sec.): Battery heartbeat
   - Fast blinking green (on 0.5 sec. / off 0.5 sec.): Battery is charging
   - Steady green: Battery is ready
2. Battery fault indicator:
   - Off: No faults
   - Blinking amber: Correctable fault detected
   - Steady amber: Uncorrectable fault detected; replace battery
3. MGMT port: 10 Mbps, 100 Mbps, or 1 Gbps Ethernet port used for storage system management and access to the BMC
   NOTE: To use the MGMT port as an iSCSI port for replication to another Storage Center, a Flex Port license and replication license are required. To use the MGMT port as a front-end connection to host servers, a Flex Port license is required.
4. REPL port: 10 Mbps, 100 Mbps, or 1 Gbps Ethernet/iSCSI port used for replication to another Storage Center
   NOTE: To use the REPL port as a front-end connection to host servers, a Flex Port license is required.
5. SAS activity indicators:
   - Off: Port is off
   - Steady green: Port is on, but without activity
   - Blinking green: Port is on and there is activity
6. Storage controller module status:
   - On: Storage controller completed POST
7. Recessed power off button: Powers down the storage controller if held for more than five seconds
8. Storage controller module fault:
   - Off: No faults
   - Steady amber: Firmware has detected an error
   - Blinking amber: Storage controller is performing POST
9. Recessed reset button: Reboots the storage controller, forcing it to restart at the POST process
10. Identification LED:
   - Off: Identification disabled
   - Blinking blue (for 15 sec.): Identification is enabled
   - Blinking blue (continuously): Storage controller shut down to the Advanced Configuration and Power Interface (ACPI) S5 state
11. USB port: Not for customer use on SCv2000/SCv2020 storage systems.
12. Diagnostic LEDs (8):
   - Green LEDs 0–3: Low byte hex POST code
   - Green LEDs 4–7: High byte hex POST code
13. Serial port (3.5 mm mini jack): Not for customer use.
14. Four Mini-SAS High Density (HD) ports: Front-end connectivity ports
   NOTE: The mini-SAS HD ports are for front-end connectivity only and cannot be used for back-end expansion.
15. Mini-SAS port B: Back-end expansion port B
16. Mini-SAS port A: Back-end expansion port A
SCv2000/SCv2020 Drives  
Dell Enterprise hard disk drives (HDDs) and Enterprise Solid-State Drives (eSSDs) are the only drives that  
can be installed in an SCv2000/SCv2020 storage system. If a non-Dell Enterprise drive is installed,  
Storage Center prevents the drive from being managed.  
The drives in an SCv2000 storage system are installed horizontally. The drives in an SCv2020 storage  
system are installed vertically. The indicators on the drives provide status and activity information.  
Figure 13. SCv2000/SCv2020 Drive Indicators  
1. Drive activity indicator:
   - Blinking green: Drive activity
   - Steady green: Drive is detected and there are no faults
2. Drive status indicator:
   - Off: Normal operation
   - Blinking amber (on 1 sec. / off 1 sec.): Drive identification is enabled
   - Blinking amber (on 2 sec. / off 1 sec.): Drive failed
   - Steady amber: Drive is safe to remove
SCv2000/SCv2020 Storage System Drive Numbering  
In an SCv2000/SCv2020 storage system, the drives are numbered from left to right.  
Dell Storage Client identifies drives as XX-YY, where XX is the number of the unit ID of the storage  
system, and YY is the drive position inside the storage system.  
An SCv2000 holds up to 12 drives, which are numbered left to right in rows starting from 0 at the top-  
left drive.  
Figure 14. SCv2000 Drive Numbering  
An SCv2020 holds up to 24 drives, which are numbered left to right starting from 0.  
Figure 15. SCv2020 Drive Numbering  
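As a worked example of the XX-YY convention, the hypothetical helper below (not part of the Dell Storage Client) formats a drive label from a unit ID and a zero-based drive position.

# Hypothetical helper: build an XX-YY drive label, where XX is the unit ID of
# the storage system and YY is the drive position inside it.
function Get-DriveLabel {
    param (
        [int]$UnitId,         # Unit ID shown on the storage system display (for example, 1)
        [int]$DrivePosition   # Zero-based position: 0-11 for an SCv2000, 0-23 for an SCv2020
    )
    '{0:D2}-{1:D2}' -f $UnitId, $DrivePosition
}

# The drive in position 2 of the storage system with unit ID 1 is reported as 01-02.
Get-DriveLabel -UnitId 1 -DrivePosition 2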
SC100/SC120 Expansion Enclosure Overview  
The SC100 is a 2U expansion enclosure that supports up to 12 3.5-inch hard drives installed in a four-column, three-row configuration. The SC120 is a 2U expansion enclosure that supports up to 24 2.5-inch hard drives installed vertically side-by-side.
The SC100/SC120 expansion enclosure ships with two redundant power supply/cooling fan modules and  
two redundant enclosure management modules (EMMs).  
SC100/SC120 Expansion Enclosure Front Panel Features and Indicators  
The SC100/SC120 front panel shows the expansion enclosure status and power supply status.  
Figure 16. SC100 Front-Panel Features and Indicators  
Figure 17. SC120 Front-Panel Features and Indicators  
1. Expansion enclosure status indicator: Lights when the expansion enclosure power is on.
   - Off: No power
   - On steady blue: Normal operation
   - Blinking blue: Indicates that Storage Center is identifying the enclosure
   - On steady amber: Expansion enclosure is turning on or was reset
   - Blinking amber: Expansion enclosure is in the fault state
2. Power supply status indicator: Lights when at least one power supply is supplying power to the expansion enclosure.
   - Off: Both power supplies are off
   - On steady green: At least one power supply is providing power to the expansion enclosure
3. Hard drives: Dell Enterprise Plus drives
   - SC100: Up to 12 3.5-inch hard drives
   - SC120: Up to 24 2.5-inch hard drives
SC100/SC120 Expansion Enclosure Back Panel Features and Indicators  
The SC100/SC120 back panel provides controls to power up and reset the expansion enclosure,  
indicators to show the expansion enclosure status, and connections for back-end cabling.  
Figure 18. SC100/SC120 Back Panel View  
1. DC power indicator:
   - Green: Normal operation. The power supply module is supplying DC power to the expansion enclosure
   - Off: Power switch is off, the power supply is not connected to AC power, or there is a fault condition
2. Power supply/cooling fan indicator:
   - Amber: Power supply/cooling fan fault is detected
   - Off: Normal operation
3. AC power indicator:
   - Green: Power supply module is connected to a source of AC power, whether or not the power switch is on
   - Off: Power supply module is disconnected from a source of AC power
4. Power socket (2): Accepts a standard computer power cord.
5. Power switches (2): Controls power for the expansion enclosure. There is one switch for each power supply/cooling fan module.
6. Power supply/cooling fan modules (2): Contains a 700 W power supply and fans that provide cooling for the expansion enclosure.
7. Enclosure Management Modules (2): EMMs provide the data path and management functions for the expansion enclosure.
SC100/SC120 Expansion Enclosure EMM Features and Indicators  
The SC100/SC120 includes two Enclosure Management Modules (EMMs) in two interface slots.  
Figure 19. SC100/SC120 EMM Features and Indicators  
1. System status indicator: Not used on SC100/SC120 expansion enclosures.
2. Serial port: Not for customer use.
3. SAS port A (in): Connects to a storage controller or to other SC100/SC120 expansion enclosures. SAS ports A and B can be used for either input or output. However, for cabling consistency, use port A as an input port.
4. Port A link status:
   - Green: All the links to the port are connected
   - Amber: One or more links are not connected
   - Off: Expansion enclosure is not connected
5. SAS port B (out): Connects to a storage controller or to other SC100/SC120 expansion enclosures. SAS ports A and B can be used for either input or output. However, for cabling consistency, use port B as an output port.
6. Port B link status:
   - Green: All the links to the port are connected
   - Amber: One or more links are not connected
   - Off: Expansion enclosure is not connected
7. EMM status indicator:
   - On steady green: Normal operation
   - Amber: Expansion enclosure did not boot or is not properly configured
   - Blinking green: Automatic update in process
   - Blinking amber (two times per sequence): Expansion enclosure is unable to communicate with other expansion enclosures
   - Blinking amber (four times per sequence): Firmware update failed
   - Blinking amber (five times per sequence): Firmware versions are different between the two EMMs
SC100/SC120 Expansion Enclosure Drives  
Dell Enterprise hard disk drives (HDDs) and Enterprise Solid-State Drives (eSSDs) are the only drives that  
can be installed in SC100/SC120 expansion enclosures. If a non-Dell Enterprise drive is installed, Storage  
Center prevents the drive from being managed.  
The drives in an SC100 expansion enclosure are installed horizontally. The drives in an SC120 expansion  
enclosure are installed vertically. The indicators on the drives provide status and activity information.  
Figure 20. SC100/SC120 Drive Indicators  
1. Drive activity indicator:
   - Blinking green: Indicates drive activity
   - Steady green: Indicates no drive activity
2. Drive status indicator:
   - Steady green: Normal operation
   - Blinking green (on 1 sec. / off 1 sec.): Drive identification is enabled
   - Off: No power to the drive
SC100/SC120 Expansion Enclosure Drive Numbering  
In an SC100/SC120 expansion enclosure, the drives are numbered from left to right starting from 0.  
Dell Storage Client identifies drives as XX-YY, where XX is the unit ID of the expansion enclosure that  
contains the drive, and YY is the drive position inside the expansion enclosure.  
An SC100 holds up to 12 drives, which are numbered in rows starting from 0 at the top-left drive.  
Figure 21. SC100 Drive Numbering  
An SC120 holds up to 24 drives, which are numbered left to right starting from 0.  
Figure 22. SC120 Drive Numbering  
2 Install the Storage Center Hardware
Prepare for the installation, mount the equipment in a rack, and install the disks.  
Unpack and Inventory the Storage Center Equipment  
Unpack the storage system and identify the items in your shipment.  
Figure 23. SCv2000/SCv2020 Storage System Components  
1. Documentation
2. Storage system
3. Rack rails
4. Front bezel
Prepare the Installation Environment  
Make sure that the environment is ready for Storage Center installation.  
Rack Space: There must be sufficient space in the rack to accommodate the storage system chassis,  
expansion enclosures, and switches.  
Power: Power must be available in the rack, and the power delivery system must meet the  
requirements of the Storage Center.  
Connectivity: The rack must be wired for connectivity to the management network and any networks  
that carry front-end IO from the Storage Center to servers.  
Safety Precautions  
Always follow these safety precautions to avoid injury and damage to Storage Center equipment.  
If equipment described in the document is used in a manner not specified by Dell, the protection  
provided by the equipment may be impaired. For your safety and protection, observe the rules described  
in the following sections.  
NOTE: See the safety and regulatory information that shipped with each Storage Center  
component. Warranty information may be included within this document or as a separate  
document.  
Installation Safety Precautions  
Follow these safety precautions:  
Dell recommends that only individuals with rack-mounting experience install an SCv2000/SCv2020  
storage system in a rack.  
Make sure the storage system is fully grounded at all times to prevent damage from electrostatic  
discharge.  
When handling the storage system hardware, you should use an electrostatic wrist guard (not  
included) or a similar form of protection.  
The storage system chassis MUST be mounted in a rack; the following safety requirements must be  
considered when doing so:  
The rack construction must be capable of supporting the total weight of the installed chassis, and the design should incorporate stabilizing features to prevent the rack from tipping or being pushed over during installation or in normal use.
To avoid danger of the rack toppling over, do not slide more than one chassis out of the rack at a  
time.  
The rack design should take into consideration the maximum operating ambient temperature for the  
unit, which is 57°C.  
Electrical Safety Precautions  
Always follow electrical safety precautions to avoid injury and damage to Storage Center equipment.  
WARNING: Disconnect power from the storage system when removing or installing components  
that are not hot-swappable. When disconnecting power, first power down the storage system  
using the Dell Storage Client and then unplug the power cords from all the power supplies in the  
storage system.  
Provide a suitable power source with electrical overload protection. All Storage Center components  
must be grounded before applying power. Make sure that there is a safe electrical earth connection to  
power supply cords. Check the grounding before applying power.  
The plugs on the power supply cords are used as the main disconnect device. Make sure that the  
socket outlets are located near the equipment and are easily accessible.  
Know the locations of the equipment power switches and the room's emergency power-off switch,  
disconnection switch, or electrical outlet.  
Do not work alone when working with high-voltage components.  
Use rubber mats specifically designed as electrical insulators.  
Do not remove covers from the power supply unit. Disconnect the power connection before  
removing a power supply from the storage system.  
Do not remove a faulty power supply unless you have a replacement model of the correct type ready for insertion. A faulty power supply must be replaced with a fully operational power supply module within 24 hours.
Unplug the storage system chassis before you move it or if you think it has become damaged in any  
way. When powered by multiple AC sources, disconnect all supply power for complete isolation.  
Electrostatic Discharge Precautions  
Always follow electrostatic discharge (ESD) precautions to avoid injury and damage to Storage Center  
equipment.  
Electrostatic discharge (ESD) is generated by two objects with different electrical charges coming into  
contact with each other. The resulting electrical discharge can damage electronic components and  
printed circuit boards. Follow these guidelines to protect your equipment from ESD:  
Dell recommends that you always use a static mat and static strap while working on components in  
the interior of the storage system chassis.  
Observe all conventional ESD precautions when handling plug-in modules and components.  
Use a suitable ESD wrist or ankle strap.  
Avoid contact with backplane components and module connectors.  
Keep all components and printed circuit boards (PCBs) in their antistatic bags until ready for use.  
General Safety Precautions  
Always follow general safety precautions to avoid injury and damage to Storage Center equipment.  
Keep the area around the storage system chassis clean and free of clutter.  
Place any system components that have been removed away from the storage system chassis or on a  
table so that they are not in the way of foot traffic.  
While working on the storage system chassis, do not wear loose clothing such as neckties and  
unbuttoned shirt sleeves, which can come into contact with electrical circuits or be pulled into a  
cooling fan.  
Remove any jewelry or metal objects from your body because they are excellent metal conductors  
that can create short circuits and harm you if they come into contact with printed circuit boards or  
areas where power is present.  
Do not lift a storage system chassis by the handles of the power supply units (PSUs). They are not  
designed to hold the weight of the entire chassis, and the chassis cover may become bent.  
Before moving a storage system chassis, remove the PSUs to minimize weight.  
Do not remove drives until you are ready to replace them.  
NOTE: To ensure proper storage system cooling, hard drive blanks must be installed in any hard  
drive slot that is not occupied.  
Install the Storage System in a Rack  
Install the storage system and other Storage Center system components in a rack.  
About this task  
Mount the storage system and expansion enclosures in a manner that allows for expansion in the rack and prevents the rack from becoming top-heavy.
Steps  
1. Secure the rails that are pre-attached to both sides of the storage system chassis.  
a. Lift the locking tab on the rail.  
b. Push the rail towards the back of the chassis until it locks in place.  
2. Determine where to mount the storage system and mark the location at the front and rear of the  
rack.  
NOTE: The storage system and expansion enclosures each require 2U of rack space for  
installation.  
3. Position a rail at the marked location and extend the rail to fit the rack.  
4. Insert the two rail pins into the pin holes.  
Figure 24. Hole Locations in Rack  
1. Pin hole
2. Rack mounting screw hole
3. Pin hole
5. Insert a screw into the rack mounting screw hole and tighten the screw to secure the rail to the rack.  
6. Repeat the previous steps for the second rail.  
7. Slide the storage system chassis onto the rails.  
Figure 25. Mount the SCv2000/SCv2020 Storage System Chassis  
8. Secure the storage system chassis to the rack using the mounting screws within each chassis ear.  
a. Lift the latch on each chassis ear to access the screws.  
b. Tighten the screws to secure the chassis into the rack.  
c. Close the latch on each chassis ear.  
9. If the Storage Center system includes expansion enclosures, mount the expansion enclosures in the  
rack. See the instructions included with the expansion enclosure for detailed steps.  
3 Front-End Cabling
Front-end cabling refers to the connections between the storage system and external devices such as  
host servers or another Storage Center.  
Front-end connections can be made using Fibre Channel, iSCSI, or SAS interfaces. Dell recommends connecting the storage system to host servers using the most redundant option available.
Types of Redundancy for Front-End Connections  
Front-end redundancy is achieved by eliminating single points of failure that could cause a server to lose  
connectivity to Storage Center.  
Depending on how Storage Center is cabled and configured, the following types of redundancy are  
available.  
Table 1. Redundancy and Failover Behavior
- Port redundancy: If a single port becomes unavailable, the port can fail over to another available port in the same fault domain.
- Storage controller redundancy: If a storage controller becomes unavailable, the ports on the offline storage controller can move to the available storage controller.
- Asymmetric Logical Unit Access (ALUA): If a storage controller becomes unavailable, all of the standby paths on the other storage controller become active.
- Multipath IO (MPIO): When multiple paths are available from a server to a storage system, a server configured for multipath IO can use multiple paths for IO. If a path becomes unavailable, the server continues to use the remaining available paths.
Port Redundancy  
To allow for port redundancy, two front-end ports on a storage controller must be connected to the  
same switch or server.  
Fault domains group front-end ports that are connected to the same network. Ports that belong to the  
same fault domain can fail over to each other because they have the same connectivity.  
If a port becomes unavailable because it is disconnected or there is a hardware failure, the port moves  
over to another port in the same fault domain.  
Storage Controller Redundancy  
To allow for storage controller redundancy, a front-end port on each storage controller must be  
connected to the same switch or server.  
If a storage controller becomes unavailable, the front-end ports on the offline storage controller move over to the ports (in the same fault domain) on the available storage controller.
Multipath IO  
MPIO allows a server to use multiple paths for IO if they are available.  
MPIO software offers redundancy at the path level. MPIO typically operates in a round-robin manner by  
sending packets first down one path and then the other. If a path becomes unavailable, MPIO software  
continues to send packets down the functioning path. MPIO is required to enable redundancy for servers
connected to a Storage Center with SAS front-end connectivity.  
NOTE: MPIO is operating-system specific and it loads as a driver on the server or it is part of the  
server operating system.  
MPIO Behavior  
The server must have at least two FC, iSCSI, or SAS ports to use MPIO.  
When MPIO is configured, a server can send IO to multiple ports on the same storage controller.  
MPIO Configuration Instructions for Host Servers  
To use MPIO, configure MPIO on the host server.  
If a Dell Storage Client wizard is used to configure host server access to Storage Center, the Dell Storage  
Client attempts to automatically configure MPIO with best practices.  
NOTE: Compare the host server settings applied by the Dell Storage Client wizard against the latest  
Dell Storage Center Best Practices document located on the Dell TechCenter (http://  
Table 2. MPIO Configuration Documents
- Linux: Dell Compellent Storage Center Linux Best Practices; Red Hat Enterprise Linux (RHEL) 6x Best Practices; Dell Compellent Best Practices: Storage Center with SUSE Linux Enterprise Server 11
- VMware vSphere 5.x: Dell Compellent Storage Center Best Practices with vSphere 5.x
- Windows Server 2008, 2008 R2, 2012, and 2012 R2: Dell Compellent Storage Center Microsoft Multipath IO (MPIO) Best Practices Guide
To manually configure MPIO on a host server, see the Dell Best Practices document that corresponds to  
the server operating system. Depending on the operating system, you may need to install MPIO software  
or configure server options.  
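As one hedged example of the server-side steps, the sketch below enables MPIO on a Windows Server 2012 or later host with PowerShell. It is a generic Windows MPIO sequence, not the Dell-recommended settings; take the claimed bus types, timers, and load-balance policy from the best practices document for your configuration.

# Sketch: install and enable Windows MPIO before connecting to Storage Center
# volumes. Run from an elevated PowerShell session; a reboot is required after
# the feature is installed.

# Install the Multipath I/O feature.
Install-WindowsFeature -Name Multipath-IO

# Automatically claim iSCSI- and SAS-attached disks for MPIO (choose the bus
# types that match your front-end connectivity).
Enable-MSDSMAutomaticClaim -BusType iSCSI
Enable-MSDSMAutomaticClaim -BusType SAS

# Set a default load-balance policy (round robin shown as an example; use the
# policy recommended in the Dell best practices document).
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# Review the hardware IDs the Microsoft DSM is configured to claim.
Get-MSDSMSupportedHW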
Cabling SAN-Attached Host Servers  
An SCv2000/SCv2020 storage system with Fibre Channel or iSCSI front-end ports connects to host  
servers through Fibre Channel or Ethernet switches.  
A storage system with Fibre Channel front-end ports connects to one or more FC switches, which  
connect to one or more host servers.  
A storage system with iSCSI front-end ports connects to one or more Ethernet switches, which  
connect to one or more host servers.  
Connecting to Fibre Channel Host Servers  
Choose the Fibre Channel connectivity option that best suits the front-end redundancy requirements and network infrastructure.
Preparing Host Servers  
Install the Fibre Channel host bus adapters (HBAs), install the drivers, and make sure that the latest  
supported firmware is installed.  
About this task  
NOTE: Refer to the Dell Storage Compatibility Matrix for a list of supported Fibre Channel HBAs.  
Steps  
1. Install Fibre Channel HBAs in the host servers.  
NOTE: Do not install Fibre Channel HBAs from different vendors in the same server.  
2. Install supported drivers for the HBAs and make sure that the HBAs have the latest supported firmware.
3. Use the Fibre Channel cabling diagrams to cable the host servers to the switches. Connecting host  
servers directly to the storage system without using Fibre Channel switches is not supported.  
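When configuring the FC switches and Storage Center, you typically need the worldwide port names (WWPNs) of the server HBAs. A minimal sketch for a Windows Server host, using the built-in Storage module, follows; it assumes Fibre Channel initiator ports are present.

# Sketch: list the WWPNs of the Fibre Channel HBAs installed in a Windows
# Server host. Record these values for switch zoning and for identifying the
# server HBAs during Storage Center configuration.
Get-InitiatorPort |
    Where-Object { $_.ConnectionType -eq 'Fibre Channel' } |
    Select-Object NodeAddress, PortAddress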
Two Fibre Channel Fabrics with Dual 16 Gb 2-Port Storage Controllers
Use two Fibre Channel (FC) fabrics to prevent an unavailable port, switch, or storage controller from causing a loss of connectivity between host servers and a storage system with dual 16 Gb 2-port storage controllers.
About this task  
In this configuration, there are two fault domains, two FC fabrics, and two FC switches. The storage  
controllers connect to each FC switch using one FC connection.  
If a physical port or FC switch becomes unavailable, the storage system is accessed from the switch in  
the other fault domain.  
If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to  
the physical ports on the other storage controller.  
Steps  
1. Connect each server to both FC fabrics.  
2. Connect fault domain 1 (shown in orange) to fabric 1.  
Storage controller 1: port 1 to FC switch 1  
Storage controller 2: port 1 to FC switch 1  
3. Connect fault domain 2 (shown in blue) to fabric 2.  
Storage controller 1: port 2 to FC switch 2  
Storage controller 2: port 2 to FC switch 2  
Example  
Figure 26. Storage System with Dual 16 Gb Storage Controllers and Two FC Switches  
1. Server 1
2. Server 2
3. FC switch 1 (Fault Domain 1)
4. FC switch 2 (Fault Domain 2)
5. Storage system
6. Storage controller 1
7. Storage controller 2
Next steps  
Install or enable MPIO on the host servers.  
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure  
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage  
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/  
Two Fibre Channel Fabrics with Dual 8 Gb 4-Port Storage Controllers  
Use two Fibre Channel (FC) fabrics to prevent an unavailable port, switch, or storage controller from  
causing a loss of connectivity between host servers and a storage system with dual 8 Gb 4-port storage  
controllers.  
About this task  
In this configuration, there are two fault domains, two FC fabrics, and two FC switches. The storage  
controllers connect to each FC switch using two FC connections.  
If a physical port becomes unavailable, the virtual port moves to another physical port in the same  
fault domain on the same storage controller.  
If an FC switch becomes unavailable, the storage system is accessed from the switch in the other fault  
domain.  
If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to  
the physical ports on the other storage controller.  
Steps  
1. Connect each server to both FC fabrics.  
2. Connect fault domain 1 (shown in orange) to fabric 1.  
Storage controller 1: port 1 to FC switch 1  
Storage controller 1: port 3 to FC switch 1  
Storage controller 2: port 1 to FC switch 1  
Storage controller 2: port 3 to FC switch 1  
3. Connect fault domain 2 (shown in blue) to fabric 2.  
Storage controller 1: port 2 to FC switch 2  
Storage controller 1: port 4 to FC switch 2  
Storage controller 2: port 2 to FC switch 2  
Storage controller 2: port 4 to FC switch 2  
Example  
Figure 27. Storage System with Dual 8 Gb Storage Controllers and Two FC Switches  
1. Server 1
2. Server 2
3. FC switch 1 (Fault domain 1)
4. FC switch 2 (Fault domain 2)
5. Storage system
6. Storage controller 1
7. Storage controller 2
Next steps  
Install or enable MPIO on the host servers.  
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure  
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage  
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/  
One Fibre Channel Fabric with Dual 16 Gb 2–Port Storage Controllers  
Use one Fibre Channel (FC) fabric to prevent an unavailable port or storage controller from causing a loss  
of connectivity between the host servers and a storage system with dual 16 Gb 2–port storage  
controllers.  
About this task  
In this configuration, there are two fault domains, one fabric, and one FC switch. Each storage controller  
connects to the FC switch using two FC connections.  
If a physical port becomes unavailable, the storage system is accessed from another port on the FC  
switch.  
If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to  
physical ports on the other storage controller.  
NOTE: This configuration is vulnerable to switch unavailability, which results in a loss of connectivity  
between the host servers and storage system.  
Steps  
1. Connect each server to the FC fabric.  
2. Connect fault domain 1 (shown in orange) to the fabric.  
Storage controller 1: port 1 to the FC switch  
Storage controller 2: port 1 to the FC switch  
3. Connect fault domain 2 (shown in blue) to the fabric.  
Storage controller 1: port 2 to the FC switch  
Storage controller 2: port 2 to the FC switch  
Example  
Figure 28. Storage System with Dual 16 Gb Storage Controllers and One FC Switch  
1. Server 1
2. Server 2
3. FC switch (Fault domain 1 and fault domain 2)
4. Storage system
5. Storage controller 1
6. Storage controller 2
Next steps  
Install or enable MPIO on the host servers.  
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure  
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage  
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/  
One Fibre Channel Fabric with Dual 8 Gb 4–Port Storage Controllers  
Use one Fibre Channel (FC) fabric to prevent an unavailable port or storage controller from causing a loss of connectivity between the host servers and a storage system with dual 8 Gb 4-port storage controllers.
About this task  
In this configuration, there are two fault domains, one fabric, and one FC switch. Each storage controller  
connects to the FC switch using four FC connections.  
If a physical port becomes unavailable, the virtual port moves to another physical port in the same  
fault domain on the same storage controller.  
If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to  
the physical ports on the other storage controller.  
NOTE: This configuration is vulnerable to switch unavailability, which results in a loss of connectivity  
between the host servers and storage system.  
Steps  
1. Connect each server to the FC fabric.  
2. Connect fault domain 1 (shown in orange) to the fabric.  
Storage controller 1: port 1 to the FC switch  
Storage controller 1: port 3 to the FC switch  
Storage controller 2: port 1 to the FC switch  
Storage controller 2: port 3 to the FC switch  
3. Connect fault domain 2 (shown in blue) to the fabric.  
Storage controller 1: port 2 to the FC switch  
Storage controller 1: port 4 to the FC switch  
Storage controller 2: port 2 to the FC switch  
Storage controller 2: port 4 to the FC switch  
Example  
Figure 29. Storage System with Dual 8 Gb Storage Controllers and One FC Switch  
1. Server 1
2. Server 2
3. FC switch (Fault domain 1 and fault domain 2)
4. Storage system
5. Storage controller 1
6. Storage controller 2
Next steps  
Install or enable MPIO on the host servers.  
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure  
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage  
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/  
Two Fibre Channel Fabrics with a Single 16 Gb 2-Port Storage Controller
Use two Fibre Channel (FC) fabrics to prevent an unavailable port or switch from causing a loss of connectivity between the host servers and a storage system with a single 16 Gb 2-port storage controller.
About this task  
In this configuration, there are two fault domains, two FC fabrics, and two FC switches. The storage  
controller connects to each FC switch using one FC connection.  
If a physical port or FC switch becomes unavailable, the storage system is accessed from the switch in  
the other fault domain.  
NOTE: This configuration is vulnerable to storage controller unavailability, which results in a loss of  
connectivity between the host servers and storage system.  
Steps  
1. Connect each server to both FC fabrics.
2. Connect fault domain 1 (shown in orange) to fabric 1.
Storage controller: port 1 to FC switch 1.  
3. Connect fault domain 2 (shown in blue) to fabric 2.
Storage controller: port 2 to FC switch 2.  
Example  
Figure 30. Storage System with a Single 16 Gb Storage Controller and Two FC Switches
1. Server 1
2. Server 2
3. FC switch 1 (Fault domain 1)
4. FC switch 2 (Fault domain 2)
5. Storage system
6. Storage controller
Next steps  
Install or enable MPIO on the host servers.  
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure  
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage  
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/  
Two Fibre Channel Fabrics with a Single 8 Gb 4–Port Storage Controller  
Use two Fibre Channel (FC) fabrics to prevent an unavailable port or switch from causing a loss of  
connectivity between the host servers and a storage system with a single 8 Gb 4–port storage controller.  
About this task  
In this configuration, there are two fault domains, two FC fabrics, and two FC switches. The storage  
controller connects to each FC switch using two FC connections.  
If a physical port becomes unavailable, the virtual port moves to another physical port in the same  
fault domain on the storage controller.  
If an FC switch becomes unavailable, the storage system is accessed from the switch in the other fault  
domain.  
NOTE: This configuration is vulnerable to storage controller unavailability, which results in a loss of  
connectivity between the host servers and storage system.  
Steps  
1. Connect each server to both FC fabrics.  
2. Connect fault domain 1 (shown in orange) to fabric 1.  
Storage controller 1: port 1 to FC switch 1
Storage controller 1: port 3 to FC switch 1
3. Connect fault domain 2 (shown in blue) to fabric 2.
Storage controller 1: port 2 to FC switch 2
Storage controller 1: port 4 to FC switch 2
Example  
Figure 31. Storage System with a Single 8 Gb Storage Controller and Two FC Switches
1. Server 1
2. Server 2
3. FC switch 1 (Fault domain 1)
4. FC switch 2 (Fault domain 2)
5. Storage system
6. Storage controller 1
Next steps  
Install or enable MPIO on the host servers.  
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure  
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage  
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/  
Using SFP+ Transceiver Modules  
An SCv2000/SCv2020 storage system with 16 Gb Fibre Channel storage controllers comes with short  
range small-form-factor pluggable (SFP+) transceiver modules.  
Figure 32. SFP+ Transceiver Module with a Bail Clasp Latch  
The SFP+ transceiver modules are installed into the ports of the SCv2000/SCv2020 storage controller.  
Fiber-optic cables are connected from the SFP+ transceiver modules in the SCv2000/SCv2020 to SFP+  
transceiver modules in Fibre Channel switches.  
Guidelines for Using SFP+ Transceiver Modules  
The SCv2000/SCv2020 storage system supports the use of SFP+ transceiver modules for 16 Gb FC  
connectivity.  
Before installing SFP+ transceiver modules and fiber-optic cables, read the following guidelines:  
CAUTION: When handling static-sensitive devices, take precautions to avoid damaging the  
product from static electricity.  
Use only Dell supported SFP+ transceiver modules with the SCv2000/SCv2020. Other generic SFP+  
transceiver modules are not supported and may not work with the SCv2000/SCv2020.  
The SFP+ transceiver module housing has an integral guide key that is designed to prevent you from  
inserting the transceiver module incorrectly.  
Use minimal pressure when inserting an SFP+ transceiver module into an FC port. Forcing the SFP+  
transceiver module into a port may damage the transceiver module or the port.  
The SFP+ transceiver module must be installed into a port before you connect the fiber-optic cable.  
The fiber-optic cable must be removed from the SFP+ transceiver module before you remove the  
transceiver module from the port.  
Install an SFP+ Transceiver Module  
Complete the following steps to install an SFP+ transceiver module in a 16 Gb FC storage controller.  
About this task  
Read the following cautions and information before installing an SFP+ transceiver module.  
WARNING: To reduce the risk of injury from laser radiation or damage to the equipment, observe  
the following precautions:  
Do not open any panels, operate controls, make adjustments, or perform procedures to a laser  
device other than those specified herein.  
Do not stare into the laser beam.  
CAUTION: Transceiver modules can be damaged by electrostatic discharge (ESD). To prevent ESD  
damage to the transceiver module, take the following precautions:  
Wear an antistatic discharge strap while handling transceiver modules.  
Place transceiver modules in antistatic packing material when transporting or storing them.  
Steps  
1. Position the transceiver module so that the key is oriented correctly to the port in the storage  
controller.  
Figure 33. Install the SFP+ Transceiver Module  
1. SFP+ transceiver module  
2. Fiber-optic cable connector  
2. Insert the transceiver module into the port until it is firmly seated and the latching mechanism clicks.  
The transceiver modules are keyed so that they can only be inserted with the correct orientation. If a  
transceiver module does not slide in easily, ensure that it is correctly oriented.  
CAUTION: To reduce the risk of damage to the equipment, do not use excessive force when  
inserting the transceiver module.  
3. Position the fiber-optic cable so that the key (the ridge on one side of the cable connector) is aligned
with the slot in the transceiver module.  
CAUTION: Touching the end of a fiber-optic cable damages the cable. Whenever a fiber-  
optic cable is not connected, replace the protective covers on the ends of the cable.  
4. Insert the fiber-optic cable into the transceiver module until the latching mechanism clicks.  
5. Insert the other end of the fiber-optic cable into the SFP+ transceiver module of a Fibre Channel  
switch.  
Remove an SFP+ Transceiver Module  
Complete the following steps to remove an SFP+ transceiver module from a 16 Gb FC storage controller.  
Prerequisites  
Use failover testing to make sure that the connection between host servers and Storage Center remains  
up if the port is disconnected.  
About this task  
Read the following cautions and information before beginning removal or replacement procedures.  
WARNING: To reduce the risk of injury from laser radiation or damage to the equipment, observe  
the following precautions:  
Do not open any panels, operate controls, make adjustments, or perform procedures to a laser  
device other than those specified herein.  
Do not stare into the laser beam.  
CAUTION: Transceiver modules can be damaged by electrostatic discharge (ESD). To prevent ESD  
damage to the transceiver module, take the following precautions:  
Wear an antistatic discharge strap while handling modules.  
Place modules in antistatic packing material when transporting or storing them.  
Steps  
1. Remove the fiber-optic cable that is inserted into the transceiver.  
a. Make certain the fiber-optic cable is labeled before removing it.  
b. Press the release clip on the bottom of the cable connector to remove the fiber-optic cable from  
the transceiver.  
CAUTION: Touching the end of a fiber-optic cable damages the cable. Whenever a fiber-  
optic cable is not connected, replace the protective covers on the ends of the cables.  
2. Open the transceiver module latching mechanism.  
3. Grasp the bail clasp latch on the transceiver module and pull the latch out and down to eject the  
transceiver module from the socket.  
4. Slide the transceiver module out of the port.  
Figure 34. Remove the SFP+ Transceiver Module  
1. SFP+ transceiver module  
2. Fiber-optic cable connector  
Fibre Channel Zoning  
When using Fibre Channel for front-end connectivity, zones must be established to ensure that storage is  
visible to the servers. Use the zoning concepts discussed in this section to plan the front-end connectivity  
before starting to cable the storage system.  
Zoning can be applied to either the ports on switches or to the World Wide Names (WWNs) of the end  
devices.  
Dell recommends creating zones using a single initiator host port and multiple Storage Center ports.  
WWN Zoning Guidelines  
When WWN zoning is configured, a device may reside on any port, or change physical ports, and still be visible, because the switch identifies the device by its WWN rather than by its physical port.
Use the following guidelines for WWN zoning (a scripted sketch of the single-initiator recommendation follows the list).
Include all Storage Center virtual WWNs in a single zone.  
Include all Storage Center physical WWNs in a single zone.  
For each host server HBA port, create a zone that includes the single HBA WWN and multiple Storage  
Center virtual WWNs on the same switch.  
For Fibre Channel replication:  
Include all Storage Center physical WWNs from Storage Center system A and Storage Center  
system B in a single zone.  
Include all Storage Center physical WWNs of Storage Center system A and the virtual WWNs of  
Storage Center system B on the particular fabric.  
Include all Storage Center physical WWNs of Storage Center system B and the virtual WWNs of  
Storage Center system A on the particular fabric.  
NOTE: Some ports may not be used or dedicated for replication; however, ports that are used must be in these zones.
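The single-initiator zoning recommendation above can be expressed as a short script that builds one zone per host HBA port from the WWNs recorded during deployment. This is a minimal Python sketch under assumed placeholder WWNs and a made-up zone-naming convention; zones must still be created with the switch vendor's own management tools.

# Minimal sketch of the single-initiator guideline: one zone per host HBA
# WWN, each containing that WWN plus all Storage Center virtual WWNs on the
# same fabric. All WWN values below are placeholders.
host_hba_wwns = ["10:00:00:00:c9:aa:aa:01", "10:00:00:00:c9:aa:aa:02"]
storage_center_virtual_wwns = ["50:00:d3:10:00:00:00:01", "50:00:d3:10:00:00:00:02"]

zones = {}
for index, hba_wwn in enumerate(host_hba_wwns, start=1):
    zone_name = f"host_hba_{index}_zone"  # naming convention is site-specific
    zones[zone_name] = [hba_wwn] + storage_center_virtual_wwns

for name, members in zones.items():
    print(name, members)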
Port Zoning Guidelines  
When port zoning is configured, only specific switch ports are visible. If a storage device is moved to a  
different switch port that is not part of the zone, it is no longer visible to the other ports in the zone.  
Use the following guidelines for port zoning.
Include all Storage Center front-end ports.  
For each host server port, create a zone that includes a single server HBA port and all Storage Center  
ports.  
Create server zones which contain all Storage Center front-end ports and a single server port.  
For Fibre Channel replication, include all Storage Center front-end ports from Storage Center system  
A and Storage Center system B in a single zone.  
Labeling the Front-End Cables  
Label the front-end cables to indicate the storage controller and port to which they are connected.  
Prerequisites  
Locate the pre-made front-end cable labels that shipped with the storage system.  
About this task  
Apply cable labels to both ends of each cable that connects a storage controller to a front-end fabric or  
network, or directly to host servers.  
Steps  
1. Starting with the top edge of the label, attach the label to the cable near the connector.  
Figure 35. Attach Label to Cable  
2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so  
that it does not obscure the text.  
Figure 36. Wrap Label Around Cable  
3. Apply a matching label to the other end of the cable.  
Connecting to iSCSI Host Servers  
Choose the iSCSI connectivity option that best suits the front-end redundancy requirements and network infrastructure.
Preparing Host Servers  
Install the iSCSI host bus adapters (HBAs) or iSCSI network adapters, install the drivers, and make sure that  
the latest supported firmware is installed.  
NOTE: Refer to the Dell Storage Compatibility Matrix for a list of supported iSCSI HBAs and network  
adapters.  
If the host server is a Windows or Linux host:  
a. Install the iSCSI HBAs or network adapters dedicated for iSCSI traffic in the host servers.  
NOTE: Do not install iSCSI HBAs or network adapters from different vendors in the same  
server.  
b. Install supported drivers for the HBAs or network adapters and make sure that the HBAs or
network adapters have the latest supported firmware installed.
c. Use the host operating system to assign IP addresses for each iSCSI port. The IP addresses must  
match the subnets for each fault domain.  
CAUTION: Correctly assign IP addresses to the HBAs or network adapters. Assigning IP  
addresses to the wrong ports can cause connectivity issues.  
NOTE: If using jumbo frames, they must be enabled and configured on all devices in the data  
path, adapter ports, switches, and storage system.  
d. Use the iSCSI cabling diagrams to cable the host servers to the switches. Connecting host servers  
directly to the storage system without using Ethernet switches is not supported.  
If the host server is a vSphere host:  
a. Install the iSCSI HBAs or network adapters dedicated for iSCSI traffic in the host servers.  
b. Install supported drivers for the HBAs or network adapters and make sure that the HBAs or
network adapters have the latest supported firmware installed.
c. If the host uses network adapters for iSCSI traffic, create a VMkernel port for each network adapter (one VMkernel port per vSwitch).
d. Use the host operating system to assign IP addresses for each iSCSI port. The IP addresses must match the subnets for each fault domain (see the address-check sketch after these steps).
CAUTION: Correctly assign IP addresses to the HBAs or network adapters. Assigning IP  
addresses to the wrong ports can cause connectivity issues.  
NOTE: If using jumbo frames, they must be enabled and configured on all devices in the data  
path, adapter ports, switches, and storage system.  
e. If the host uses network adapters for iSCSI traffic, add the VMkernel ports to the iSCSI software  
initiator.  
f. Use the iSCSI cabling diagrams to cable the host servers to the switches. Connecting host servers  
directly to the storage system without using Ethernet switches is not supported.  
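Because each iSCSI port must be addressed in the subnet of its fault domain, it can help to check the planned addresses before assigning them on the hosts. The following Python sketch compares a planned address list against the fault-domain subnets from the deployment worksheet; the subnets, port names, and addresses shown are placeholders.

# Minimal sketch: confirm that each planned iSCSI port address falls inside
# the subnet of its fault domain (all values below are placeholders).
import ipaddress

fault_domain_subnets = {
    "fault_domain_1": ipaddress.ip_network("192.168.10.0/24"),
    "fault_domain_2": ipaddress.ip_network("192.168.20.0/24"),
}

planned_addresses = [
    ("server1_iscsi_port1", "fault_domain_1", "192.168.10.21"),
    ("server1_iscsi_port2", "fault_domain_2", "192.168.20.21"),
]

for port_name, domain, address in planned_addresses:
    subnet = fault_domain_subnets[domain]
    status = "OK" if ipaddress.ip_address(address) in subnet else "WRONG SUBNET"
    print(f"{port_name}: {address} in {domain} ({subnet}) -> {status}")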
Two iSCSI Networks with Dual 10 GbE 2–Port Storage Controllers  
Use two iSCSI networks to prevent an unavailable port, switch, or storage controller from causing a loss  
of connectivity between the host servers and a storage system with dual 10 GbE 2–port storage  
controllers.  
About this task  
In this configuration, there are two fault domains, two iSCSI networks, and two Ethernet switches. The  
storage controllers connect to each Ethernet switch using one iSCSI connection.  
If a physical port or Ethernet switch becomes unavailable, the storage system is accessed from the  
switch in the other fault domain.  
If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to  
the physical ports on the other storage controller.  
Steps  
1. Connect each server to both iSCSI networks.  
2. Connect fault domain 1 (shown in orange) to iSCSI network 1.  
Storage controller 1: port 1 to Ethernet switch 1  
Storage controller 2: port 1 to Ethernet switch 1  
3. Connect fault domain 2 (shown in blue) to iSCSI network 2.  
Storage controller 1: port 2 to Ethernet switch 2  
Storage controller 2: port 2 to Ethernet switch 2  
Example  
Figure 37. Storage System with Dual 10 GbE Storage Controllers and Two Ethernet Switches  
1. Server 1
2. Server 2
3. Ethernet switch 1 (Fault domain 1)
4. Ethernet switch 2 (Fault domain 2)
5. Storage system
6. Storage controller 1
7. Storage controller 2
Next steps  
Install or enable MPIO on the host servers.  
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure  
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage  
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/  
Two iSCSI Networks with Dual 1 GbE 4–Port Storage Controllers  
Use two iSCSI networks to prevent an unavailable port, switch, or storage controller from causing a loss  
of connectivity between the host servers and a storage system with dual 1 GbE 4–port storage  
controllers.  
About this task  
In this configuration, there are two fault domains, two iSCSI networks, and two Ethernet switches. The  
storage controllers connect to each Ethernet switch using two iSCSI connections.  
If a physical port becomes unavailable, the virtual port moves to another physical port in the same  
fault domain on the same storage controller.  
If an Ethernet switch becomes unavailable, the storage system is accessed from the switch in the  
other fault domain.  
If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to  
physical ports on the other storage controller.  
Steps  
1. Connect each server to both iSCSI networks.  
2. Connect fault domain 1 (shown in orange) to iSCSI network 1.  
Storage controller 1: port 1 to Ethernet switch 1  
Storage controller 2: port 1 to Ethernet switch 1  
Storage controller 1: port 3 to Ethernet switch 1  
Storage controller 2: port 3 to Ethernet switch 1  
3. Connect fault domain 2 (shown in blue) to iSCSI network 2.  
Storage controller 1: port 2 to Ethernet switch 2  
Storage controller 2: port 2 to Ethernet switch 2  
Storage controller 1: port 4 to Ethernet switch 2  
Storage controller 2: port 4 to Ethernet switch 2  
Example  
Figure 38. Storage System with Dual 1 GbE Storage Controllers and Two Ethernet Switches  
1. Server 1
2. Server 2
3. Ethernet switch 1 (Fault domain 1)
4. Ethernet switch 2 (Fault domain 2)
5. Storage system
6. Storage controller 1
7. Storage controller 2
Next steps  
Install or enable MPIO on the host servers.  
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure  
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage  
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/  
One iSCSI Network with Dual 10 GbE 2–Port Storage Controllers  
Use one iSCSI network to prevent an unavailable port or storage controller from causing a loss of  
connectivity between the host servers and a storage system with dual 10 GbE 2–port storage controllers.
About this task  
In this configuration, there are two fault domains, one iSCSI network, and one Ethernet switch. Each  
storage controller connects to the Ethernet switch using two iSCSI connections.  
If a physical port becomes unavailable, the storage system is accessed from another port on the  
Ethernet switch.  
If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to  
the physical ports on the other storage controller.  
NOTE: This configuration is vulnerable to switch unavailability, which results in a loss of connectivity  
between the host servers and storage system.  
Steps  
1. Connect each server to the iSCSI network.  
2. Connect fault domain 1 (shown in orange) to the iSCSI network.  
Storage controller 1: port 1 to the Ethernet switch  
Storage controller 2: port 1 to the Ethernet switch  
3. Connect fault domain 2 (shown in blue) to the iSCSI network.  
Storage controller 1: port 2 to the Ethernet switch  
Storage controller 2: port 2 to the Ethernet switch  
Example  
Figure 39. Storage System with Dual 10 GbE Storage Controllers and One Ethernet Switch  
1. Server 1
2. Server 2
3. Ethernet switch (Fault domain 1 and fault domain 2)
4. Storage system
5. Storage controller 1
6. Storage controller 2
Next steps  
Install or enable MPIO on the host servers.  
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure  
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage  
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/  
One iSCSI Network with Dual 1 GbE 4–Port Storage Controllers
Use one iSCSI network to prevent an unavailable port or storage controller from causing a loss of  
connectivity between the host servers and a storage system with dual 1 GbE 4–port storage controllers.  
About this task  
In this configuration, there are two fault domains, one iSCSI network, and one Ethernet switch. Each  
storage controller connects to the Ethernet switch using four iSCSI connections.  
If a physical port becomes unavailable, the virtual port moves to another physical port in the same  
fault domain on the same storage controller.  
If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to  
the physical ports on the other storage controller.  
NOTE: This configuration is vulnerable to switch unavailability, which results in a loss of connectivity  
between the host servers and storage system.  
Steps  
1. Connect each server to the iSCSI network.  
2. Connect fault domain 1 (shown in orange) to the iSCSI network.  
Storage controller 1: port 1 to the Ethernet switch  
Storage controller 1: port 3 to the Ethernet switch  
Storage controller 2: port 1 to the Ethernet switch  
Storage controller 2: port 3 to the Ethernet switch  
3. Connect fault domain 2 (shown in blue) to the iSCSI network.  
Storage controller 1: port 2 to the Ethernet switch  
Storage controller 1: port 4 to the Ethernet switch  
Storage controller 2: port 2 to the Ethernet switch  
Storage controller 2: port 4 to the Ethernet switch  
Example  
Figure 40. Storage System with Dual 1 GbE Storage Controllers and One Ethernet Switch  
1. Server 1
2. Server 2
3. Ethernet switch (Fault domain 1 and fault domain 2)
4. Storage system
5. Storage controller 1
6. Storage controller 2
Next steps  
Install or enable MPIO on the host servers.  
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure  
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage  
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/  
Two iSCSI Networks with a Single 10 GbE 2–Port Storage Controller  
Use two iSCSI networks to prevent an unavailable port or switch from causing a loss of connectivity  
between the host servers and a storage system with a single 10 GbE 2–port storage controller.  
About this task  
In this configuration, there are two fault domains, two iSCSI networks, and two Ethernet switches. The  
storage controller connects to the Ethernet switches using two iSCSI connections.  
If a physical port or Ethernet switch becomes unavailable, the storage system is accessed from the switch  
in the other fault domain.  
NOTE: This configuration is vulnerable to storage controller unavailability, which results in a loss of  
connectivity between the host servers and storage system.  
Steps  
1. Connect each server to both iSCSI networks.
2. Connect fault domain 1 (shown in orange) to iSCSI network 1.
Storage controller: port 1 to Ethernet switch 1  
3. Connect fault domain 2 (shown in blue) to iSCSI network 2.
Storage controller: port 2 to Ethernet switch 2  
Example  
Figure 41. Storage System with One 10 GbE Storage Controller and Two Ethernet Switches  
1. Server 1
2. Server 2
3. Ethernet switch 1 (Fault domain 1)
4. Ethernet switch 2 (Fault domain 2)
5. Storage system
6. Storage controller
Next steps  
Install or enable MPIO on the host servers.  
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure  
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage  
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/  
Two iSCSI Networks with a Single 1 GbE 4–Port Storage Controller  
Use two iSCSI networks to prevent an unavailable port or switch from causing a loss of connectivity  
between the host servers and a storage system with a single 1 GbE 4–port storage controller.  
About this task  
In this configuration, there are two fault domains, two iSCSI networks, and two Ethernet switches. The  
storage controller connects to each Ethernet switch using two iSCSI connections.  
If a physical port becomes unavailable, the virtual port moves to another physical port in the same  
fault domain on the storage controller.  
If an Ethernet switch becomes unavailable, the storage system is accessed from the switch in the  
other fault domain.  
NOTE: This configuration is vulnerable to storage controller unavailability, which results in a loss of  
connectivity between the host servers and storage system.  
Steps  
1. Connect each server to both iSCSI networks.  
2. Connect fault domain 1 (shown in orange) to iSCSI network 1.  
Storage controller 1: port 1 to Ethernet switch 1  
Storage controller 1: port 3 to Ethernet switch 1  
3. Connect fault domain 2 (shown in blue) to iSCSI network 2.  
Storage controller 1: port 2 to Ethernet switch 2  
Storage controller 1: port 4 to Ethernet switch 2  
Example  
Figure 42. Storage System with One 1 GbE Storage Controller and Two Ethernet Switches  
1. Server 1
2. Server 2
3. Ethernet switch 1 (Fault domain 1)
4. Ethernet switch 2 (Fault domain 2)
5. Storage system
6. Storage controller
Next steps  
Install or enable MPIO on the host servers.  
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure  
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage  
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/  
Labeling the Front-End Cables  
Label the front-end cables to indicate the storage controller and port to which they are connected.  
Prerequisites  
Locate the pre-made front-end cable labels that shipped with the storage system.  
About this task  
Apply cable labels to both ends of each cable that connects a storage controller to a front-end fabric or  
network, or directly to host servers.  
Steps  
1. Starting with the top edge of the label, attach the label to the cable near the connector.  
Figure 43. Attach Label to Cable  
2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so  
that it does not obscure the text.
Figure 44. Wrap Label Around Cable  
3. Apply a matching label to the other end of the cable.  
Cabling Direct-Attached Host Servers  
An SCv2000/SCv2020 storage system with SAS front-end ports connects directly to host servers.  
Preparing Host Servers  
On each host server, install the SAS host bus adapters (HBAs), install the drivers, and make sure that the  
latest supported firmware is installed.  
About this task  
NOTE: Refer to the Dell Storage Compatibility Matrix for a list of supported SAS HBAs.  
Steps  
1. Install the SAS HBAs in the host servers.  
NOTE: Do not install SAS HBAs from different vendors in the same server.  
2. Install supported drivers for the HBAs and make sure that the HBAs have the latest supported  
firmware installed.  
3. Use the SAS cabling diagram to cable the host servers directly to the storage system.  
NOTE: If deploying vSphere hosts, configure only one host at a time.  
SAS Virtual Port Mode  
To provide redundancy in SAS virtual port mode, the front-end ports on each storage controller must be  
directly connected to the server.
In SAS virtual port mode, a volume is active on only one storage controller, but it is visible to both storage  
controllers. Asymmetric Logical Unit Access (ALUA) controls the path that a server uses to access a  
volume.  
If a storage controller becomes unavailable, the volume becomes active on the other storage controller.  
The state of the paths on the available storage controller are set to Active/Optimized and the state of the  
paths on the other storage controller are set to Standby. When the storage controller becomes available  
again and the ports are rebalanced, the volume moves back to its preferred storage controller and the  
ALUA states are updated.  
If a SAS path becomes unavailable, the Active/Optimized volumes on that path become active on the  
other storage controller. The state of the failed path for those volumes is set to Standby and the state of  
the active path for those volumes is set to Active/Optimized.  
NOTE: Failover in SAS virtual port mode occurs within a single fault domain. Therefore, a server  
must have both connections in the same fault domain. For example, if a server is connected to SAS  
port 2 on one storage controller, it must be connected to SAS port 2 on the other storage
controller. If a server is not cabled correctly when a storage controller or SAS path becomes  
unavailable, access to the volume is lost.  
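The path-state behavior described above can be pictured as a small table of ALUA states per fault domain. The Python sketch below models one volume with one path per storage controller and shows how the states swap when the preferred path becomes unavailable; it is a conceptual illustration only, not output from Storage Center or the host MPIO stack.

# Conceptual sketch of ALUA path states for one volume in a single SAS fault
# domain (not actual Storage Center output; names are placeholders).
paths = {
    "storage_controller_1: port 2": "Active/Optimized",  # preferred path
    "storage_controller_2: port 2": "Standby",
}

def fail_over(path_states, unavailable_path):
    # The unavailable path drops to Standby and the surviving path in the
    # same fault domain becomes Active/Optimized.
    return {
        path: ("Standby" if path == unavailable_path else "Active/Optimized")
        for path in path_states
    }

print("Before:", paths)
print("After: ", fail_over(paths, "storage_controller_1: port 2"))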
Two Servers Connected to Dual 12 Gb 4–Port SAS Storage Controllers  
A storage system with four 12 Gb front-end SAS ports on each storage controller can connect up to two  
host servers, if each host server has two SAS HBAs with dual SAS ports.  
About this task  
In this configuration, there are four fault domains spread across both storage controllers. The storage  
controllers are connected to each host server using four SAS connections.  
If a storage controller becomes unavailable, all of the standby paths on the other storage controller  
become active.  
Steps  
1. Connect fault domain 1 (shown in orange) to the host server 1.  
a. Connect a SAS cable from storage controller 1: port 1 to host server 1.  
b. Connect a SAS cable from storage controller 2: port 1 to host server 1.  
2. Connect fault domain 2 (shown in blue) to the host server 1.  
a. Connect a SAS cable from storage controller 1: port 2 to host server 1.  
b. Connect a SAS cable from storage controller 2: port 2 to host server 1.  
3. Connect fault domain 3 (shown in gray) to the host server 2.  
a. Connect a SAS cable from storage controller 1: port 3 to host server 2.  
b. Connect a SAS cable from storage controller 2: port 3 to host server 2.  
4. Connect fault domain 4 (shown in red) to the host server 2.  
a. Connect a SAS cable from storage controller 1: port 4 to host server 2.  
b. Connect a SAS cable from storage controller 2: port 4 to host server 2.  
Example  
Figure 45. Storage System with Dual 12 Gb SAS Storage Controllers Connected to Two Host Servers  
1. Server 1
2. Server 2
3. Storage system
4. Storage controller 1
5. Storage controller 2
Next steps  
Install or enable MPIO on the host servers.  
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure  
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage  
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/  
Four Servers Connected to Dual 12 Gb 4–Port SAS Storage Controllers  
A storage system with four 12 Gb front-end SAS ports on each storage controller can connect to up to  
four host servers, if each host server has one HBA with dual SAS ports.  
About this task  
In this configuration, four fault domains are spread across both storage controllers. The storage  
controllers are connected to each host server using two SAS connections.  
If a storage controller becomes unavailable, all of the standby paths on the other storage controller  
become active.  
Steps  
1. Connect fault domain 1 (shown in orange) to the host server 1.  
a. Connect a SAS cable from storage controller 1: port 1 to host server 1.  
b. Connect a SAS cable from storage controller 2: port 1 to host server 1.  
2. Connect fault domain 2 (shown in blue) to the host server 2.  
a. Connect a SAS cable from storage controller 1: port 2 to host server 2.  
b. Connect a SAS cable from storage controller 2: port 2 to host server 2.  
3. Connect fault domain 3 (shown in gray) to the host server 3.  
a. Connect a SAS cable from storage controller 1: port 3 to host server 3.  
b. Connect a SAS cable from storage controller 2: port 3 to host server 3.  
4. Connect fault domain 4 (shown in red) to the host server 4.  
a. Connect a SAS cable from storage controller 1: port 4 to host server 4.  
b. Connect a SAS cable from storage controller 2: port 4 to host server 4.  
Example  
Figure 46. Storage System with Dual 12 Gb SAS Storage Controllers Connected to Four Host Servers  
1. Server 1
2. Server 2
3. Server 3
4. Server 4
5. Storage system
6. Storage controller 1
7. Storage controller 2
Next steps  
Install or enable MPIO on the host servers.  
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure  
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage  
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/  
Two Servers Connected to a Single 12 Gb 4–Port SAS Storage Controller  
A storage system with four 12 Gb front-end SAS ports on a single storage controller can connect up to  
two host servers, if each host server has two SAS HBAs.
About this task  
In this configuration, there are four fault domains and the storage controller is connected to each host  
server using two SAS connections.  
NOTE: This configuration is vulnerable to storage controller unavailability, which results in a loss of  
connectivity between the host servers and storage system.  
Steps  
1. Connect fault domain 1 to the host server 1, by connecting a SAS cable from storage controller 1:  
port 1 to host server 1.  
2. Connect fault domain 2 to the host server 1, by connecting a SAS cable from storage controller 1:  
port 2 to host server 1.  
3. Connect fault domain 3 to the host server 2, by connecting a SAS cable from storage controller 1:  
port 3 to host server 2.  
4. Connect fault domain 4 to the host server 2, by connecting a SAS cable from storage controller 1:  
port 4 to host server 2.  
Example  
Figure 47. Storage System with One 12 Gb SAS Storage Controller Connected to Two Host Servers  
1. Server 1
2. Server 2
3. Storage system
4. Storage controller
Next steps  
Install or enable MPIO on the host servers.  
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure  
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage  
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/  
Labeling the Front-End Cables  
Label the front-end cables to indicate the storage controller and port to which they are connected.  
Prerequisites  
Locate the pre-made front-end cable labels that shipped with the storage system.  
About this task  
Apply cable labels to both ends of each cable that connects a storage controller to a front-end fabric or  
network, or directly to host servers.  
Steps  
1. Starting with the top edge of the label, attach the label to the cable near the connector.  
Figure 48. Attach Label to Cable  
2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so  
that it does not obscure the text.  
Figure 49. Wrap Label Around Cable  
3. Apply a matching label to the other end of the cable.  
Cabling the Ethernet Management Port  
To manage Storage Center, the Ethernet management (MGMT) port of each storage controller must be  
connected to an Ethernet switch that is part of the management network.  
About this task  
The management port provides access to the storage system through the Dell Storage Client software  
and is used to send emails, alerts, SNMP traps, and SupportAssist diagnostic data. The management port  
also provides access to the baseboard management controller (BMC) software.  
NOTE: If the Flex Port license is installed, the management port becomes a shared iSCSI port. To  
use the management port as an iSCSI port, it must be cabled to a network switch dedicated to iSCSI  
traffic. Special considerations must be taken into account when sharing the management port.  
Steps  
1. Connect the Ethernet management port on storage controller 1 to the Ethernet switch.  
2. Connect the Ethernet management port on storage controller 2 to the Ethernet switch.  
Figure 50. Storage System Connected to a Management Network  
1. Ethernet switch  
2. Storage system  
3. Storage controller 1  
4. Storage controller 2  
Labeling the Ethernet Management Cables  
Label the Ethernet management cables that connect each storage controller to an Ethernet switch.  
Prerequisites  
Locate the pre-made Ethernet management cable labels that shipped with the SCv2000/SCv2020  
storage system.  
About this task  
Apply cable labels to both ends of each Ethernet management cable.  
Steps  
1. Starting with the top edge of the label, attach the label to the cable near the connector.  
Figure 51. Attach Label to Cable  
2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so  
that it does not obscure the text.  
Figure 52. Wrap Label Around Cable  
3. Apply a matching label to the other end of the cable.  
Cabling the Embedded Ports for iSCSI Replication  
If the Storage Center is licensed for replication, the replication port can be connected to an Ethernet  
switch and used for iSCSI replication. If the Storage Center is licensed for replication and the Flex Port  
license is installed, the management port and replication port can be connected to an Ethernet switch  
and used for iSCSI replication.  
Cabling the Replication Port for iSCSI Replication  
If replication is licensed, the replication (REPL) port can be used to replicate data to another Storage  
Center.  
About this task  
Connect the replication port of each storage controller to an Ethernet switch through which the Storage  
Center can perform iSCSI replication.  
Steps  
1. Connect the replication port on storage controller 1 to Ethernet switch 2.  
2. Connect the replication port on storage controller 2 to Ethernet switch 2.  
NOTE: The management port on each storage controller is connected to an Ethernet switch on  
the management network.  
Figure 53. Replication Ports Connected to an iSCSI Network  
1. Ethernet switch 1 (Management network)
2. Ethernet switch 2 (iSCSI network)
3. Storage system
4. Storage controller 1
5. Storage controller 2
3. To configure the fault domain and ports, click the Configure Embedded iSCSI Ports link on the  
Configuration Complete page of the Discover and Configure Uninitialized SCv2000 Series Storage  
Centers wizard.  
4. To configure replication, refer to the Dell Enterprise Manager Administrator’s Guide.  
Cabling the Management Port and Replication Port for iSCSI Replication  
If replication is licensed and the Flex Port license is installed, both the management (MGMT) and  
replication (REPL) ports can be used to replicate data to another Storage Center.  
About this task  
Connect the management port and replication port on each storage controller to an Ethernet switch  
through which the Storage Center can perform replication.  
Steps  
1. Connect Flex Port Domain 1 (shown in orange) to the iSCSI network.  
a. Connect the management port on storage controller 1 to the Ethernet switch.  
b. Connect the management port on storage controller 2 to the Ethernet switch.  
2. Connect iSCSI Embedded Domain 2 (shown in blue) to the iSCSI network.  
a. Connect the replication port on storage controller 1 to the Ethernet switch.  
b. Connect the replication port on storage controller 2 to the Ethernet switch.  
Figure 54. Management and Replication Ports Connected to an iSCSI Network  
1. Ethernet switch (iSCSI network)
2. Storage system
3. Storage controller 1
4. Storage controller 2
3. To configure the fault domains and ports, click the Configure Embedded iSCSI Ports link on the  
Configuration Complete page of the Discover and Configure Uninitialized SCv2000 Series Storage  
Centers wizard.  
4. To configure replication, refer to the Dell Enterprise Manager Administrator’s Guide.  
Cabling the Embedded Ports for iSCSI Host Connectivity  
If the Flex Port license is installed on the Storage Center, the management port and replication port can  
be connected to an Ethernet switch and used for iSCSI host connectivity.  
Two iSCSI Networks with Dual Storage Controllers and Embedded Ethernet  
Ports  
Use two iSCSI networks to prevent an unavailable port, switch, or storage controller from causing a loss  
of connectivity between the host servers and a storage system with dual storage controllers.  
About this task  
In this configuration, there are two fault domains, two iSCSI networks, and two Ethernet switches. The  
storage controllers connect to each Ethernet switch using one iSCSI connection.  
If a physical port or Ethernet switch becomes unavailable, the storage system is accessed from the  
switch in the other fault domain.  
If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to  
the physical ports on the other storage controller.  
Steps  
1. Connect each server to both iSCSI networks.  
2. Connect embedded fault domain 1 (shown in orange) to iSCSI network 1.  
a. Connect the management port on storage controller 1 to Ethernet switch 1.  
b. Connect the management port on storage controller 2 to Ethernet switch 1.  
3. Connect embedded fault domain 2 (shown in blue) to iSCSI network 2.  
a. Connect the replication port on storage controller 1 to Ethernet switch 2.  
b. Connect the replication port on storage controller 2 to Ethernet switch 2.  
Figure 55. Storage System with Dual Storage Controllers and Two Ethernet Switches  
1. Server 1
2. Server 2
3. Ethernet switch 1 (Fault domain 1)
4. Ethernet switch 2 (Fault domain 2)
5. Storage system
6. Storage controller 1
7. Storage controller 2
4. To configure the fault domains and ports, click the Configure Embedded iSCSI Ports link on the  
Configuration Complete page of the Discover and Configure Uninitialized SCv2000 Series Storage  
Centers wizard.  
Next steps  
Install or enable MPIO on the host servers.  
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure  
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage  
Center Best Practices documents located on the Dell TechCenter (http://en.community.dell.com/  
One iSCSI Network with Dual Storage Controllers and Embedded Ethernet  
Ports  
Use one iSCSI network to prevent an unavailable port or storage controller from causing a loss of  
connectivity between the host servers and a storage system with dual storage controllers.
About this task  
In this configuration, there are two fault domains, one iSCSI network, and one Ethernet switch. Each  
storage controller connects to the Ethernet switch using two iSCSI connections.  
If a physical port becomes unavailable, the storage system is accessed from another port on the  
Ethernet switch.  
If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to  
the physical ports on the other storage controller.  
NOTE: This configuration is vulnerable to switch unavailability, which results in a loss of connectivity  
between the host servers and storage system.  
Steps  
1. Connect each server to the iSCSI network.  
2. Connect embedded fault domain 1 (shown in orange) to the iSCSI network.  
a. Connect the management port on storage controller 1 to the Ethernet switch  
b. Connect the management port on storage controller 2 to the Ethernet switch  
3. Connect embedded fault domain 2 (shown in blue) to the iSCSI network.  
a. Connect the replication port on storage controller 1 to the Ethernet switch  
b. Connect the replication port on storage controller 2 to the Ethernet switch  
Figure 56. Storage System with Dual Storage Controllers and One Ethernet Switch  
1. Server 1  
2. Server 2  
3. Ethernet switch (Fault domain 1 and fault  
domain 2)  
4. Storage system  
5. Storage controller 1  
6. Storage controller 2  
4. To configure the fault domains and ports, click the Configure Embedded iSCSI Ports link on the  
Configuration Complete page of the Discover and Configure Uninitialized SCv2000 Series Storage  
Centers wizard.  
Next steps  
Install or enable MPIO on the host servers.  
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure  
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage  
Center Best Practices documents located on the Dell TechCenter (http://en.community.dell.com/  
4
Back-End Cabling and Connecting Power  
Back-end cabling refers to the connections between the storage system and expansion enclosures. After  
the back-end cabling is complete, connect power cables to the storage system components and turn on  
the hardware.  
An SCv2000/SCv2020 storage system can be deployed with or without expansion enclosures.  
When an SCv2000/SCv2020 is deployed without expansion enclosures, the storage controllers must  
be interconnected using SAS cables. This connection enables SAS path redundancy between the  
storage controllers and the internal disks.  
When an SCv2000/SCv2020 is deployed with expansion enclosures, the expansion enclosures  
connect to the SAS ports on the storage controllers.  
SC100/SC120 Expansion Enclosure Cabling Guidelines  
You can connect multiple SC100/SC120 expansion enclosures to an SCv2000/SCv2020 by cabling the  
expansion enclosures in series.  
The connection between a storage system and the expansion enclosures is referred to as a SAS chain. A  
SAS chain is made up of two paths, which are referred to as the A side and B side. Each side of the SAS  
chain starts at the initiator only SAS port on one storage controller and ends at the initiator/target SAS  
port on the other storage controller.
SAS Redundancy  
Use redundant SAS cabling to make sure that an unavailable I/O port or storage controller does not cause a Storage Center outage.
If an I/O port or storage controller becomes unavailable, the Storage Center I/O moves to the redundant path.
SAS Port Types  
There are two types of back-end SAS ports on each storage controller: initiator only and initiator/target.  
The ports labeled A are initiator only ports and the ports labeled B are initiator/target ports.  
Figure 57. SCv2000/SCv2020 SAS Ports  
1. Storage system
2. Storage controller 1
3. Initiator only ports (ports A)
4. Initiator/target ports (ports B)
5. Storage controller 2
Back-End Connections for an SCv2000/SCv2020 without  
Expansion Enclosures  
When you deploy an SCv2000/SCv2020 without expansion enclosures, you must interconnect the  
storage controllers using SAS cables.  
About this task  
NOTE: The top storage controller is storage controller 1 and the bottom storage controller is  
storage controller 2.  
Steps  
1. Connect a SAS cable between storage controller 1: port A and storage controller 2: port B.  
2. Connect a SAS cable between storage controller 1: port B and storage controller 2: port A.  
Example  
Figure 58. Interconnected Storage Controllers  
1. SCv2000/SCv2020 storage system
2. Storage controller 1
3. Storage controller 2
Back-End Connections for an SCv2000/SCv2020 with  
Expansion Enclosures  
This section shows common cabling between the SCv2000/SCv2020 and SC100/SC120 expansion  
enclosures. Locate the scenario that most closely matches the Storage Center that you are configuring  
and follow the instructions, modifying them as necessary.  
The SCv2000/SCv2020 supports up to eight SC100 expansion enclosures, up to four SC120 expansion  
enclosures, or any combination of SC100/SC120 expansion enclosures as long as the total disk count  
does not exceed 168 disks.  
SC100/SC120 expansion enclosure chains are cabled as follows.
Side A (Orange): Expansion enclosures are connected from port B to port A, using the top EMMs.
Side B (Blue): Expansion enclosures are connected from port A to port B, using the bottom EMMs.
SCv2000/SCv2020 and One SC100/SC120 Expansion Enclosure  
This figure shows an SCv2000/SCv2020 cabled to one expansion enclosure forming a single chain.  
Figure 59. SCv2000/SCv2020 and One SC100/SC120 Expansion Enclosure  
1. Storage system
2. Storage controller 1
3. Storage controller 2
4. Expansion enclosure
Table 3. Storage System Connected to One Expansion Enclosure  
Chain 1: A Side (Orange)
1. Storage controller 1: port A to the expansion enclosure: top EMM, port A.
2. Expansion enclosure: top EMM, port B to storage controller 2: port B.
Chain 1: B Side (Blue)
1. Storage controller 2: port A to the expansion enclosure: bottom EMM, port A.
2. Expansion enclosure: bottom EMM, port B to storage controller 1: port B.
SCv2000/SCv2020 and Two or More SC100/SC120 Expansion Enclosures  
This figure shows an SCv2000/SCv2020 cabled to two expansion enclosures forming a single chain.  
Figure 60. SCv2000/SCv2020 and Two SC100/SC120 Expansion Enclosures  
1. Storage system
2. Storage controller 1
3. Storage controller 2
4. Expansion enclosure 1
5. Expansion enclosure 2
To connect additional expansion enclosures, cable the expansion enclosures in series. Cable the top EMM, port B from the last enclosure in the chain to the top EMM, port A of the enclosure to add. Then, cable the bottom EMM, port B from the last enclosure in the chain to the bottom EMM, port A of the enclosure to add (a chain-generation sketch follows Table 4).
Table 4. Storage System Connected to Two Expansion Enclosures
Chain 1: A Side (Orange)
1. Storage controller 1: port A to expansion enclosure 1: top EMM, port A.
2. Expansion enclosure 1: top EMM, port B to expansion enclosure 2: top EMM, port A.
3. Expansion enclosure 2: top EMM, port B to storage controller 2: port B.
Chain 1: B Side (Blue)
1. Storage controller 2: port A to expansion enclosure 1: bottom EMM, port A.
2. Expansion enclosure 1: bottom EMM, port B to expansion enclosure 2: bottom EMM, port A.
3. Expansion enclosure 2: bottom EMM, port B to storage controller 1: port B.
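The series-cabling rule above applies to any number of enclosures, and it can be written out as a short script that prints the A-side and B-side connection lists in the same form as Table 3 and Table 4. This is a Python planning sketch only, with placeholder enclosure names; it assumes at least one enclosure in the chain.

# Sketch: generate the A-side and B-side SAS chain connections for a storage
# system with N expansion enclosures cabled in series (names are placeholders).
def sas_chain(enclosure_count):
    enclosures = [f"expansion enclosure {n}" for n in range(1, enclosure_count + 1)]
    a_side = [f"Storage controller 1: port A to {enclosures[0]}: top EMM, port A"]
    b_side = [f"Storage controller 2: port A to {enclosures[0]}: bottom EMM, port A"]
    for previous, added in zip(enclosures, enclosures[1:]):
        a_side.append(f"{previous}: top EMM, port B to {added}: top EMM, port A")
        b_side.append(f"{previous}: bottom EMM, port B to {added}: bottom EMM, port A")
    a_side.append(f"{enclosures[-1]}: top EMM, port B to storage controller 2: port B")
    b_side.append(f"{enclosures[-1]}: bottom EMM, port B to storage controller 1: port B")
    return a_side, b_side

a_side, b_side = sas_chain(2)
print("Chain 1: A Side (Orange)")
print("\n".join(a_side))
print("Chain 1: B Side (Blue)")
print("\n".join(b_side))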
Label the Back-End Cables  
Label the back-end cables that interconnect the storage controllers or label the back-end cables that  
connect the storage system to the expansion enclosures.  
Prerequisites  
Locate the pre-made cable labels provided with the expansion enclosures.  
About this task  
Apply cable labels to both ends of each SAS cable to indicate the chain number and side (A or B).  
Steps  
1. Starting with the top edge of the label, attach the label to the cable near the connector.  
Figure 61. Attach Label to Cable  
2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so  
that it does not obscure the text.  
Figure 62. Wrap Label Around Cable  
3. Apply a matching label to the other end of the cable.  
Connect Power Cables and Turn on the Storage System  
Connect power cables to the storage system components and turn on the hardware.  
About this task  
If the storage system is installed without expansion enclosures, connect power cables to the storage  
system chassis and turn on the storage system.  
If the storage system is installed with expansion enclosures, connect power cables to the expansion  
enclosure chassis and turn on the expansion enclosures as described in the Dell Storage Center  
SC100/SC120 Expansion Enclosure Getting Started Guide. After the expansion enclosures are  
powered on, connect power to the storage system chassis and turn on the storage system.  
Steps  
1. Ensure that the power switches are in the OFF position before connecting the power cables.  
2. Connect the power cables to both power supply/cooling fan modules in the storage system chassis  
and secure the power cables firmly to the brackets using the straps provided.  
Figure 63. Connect the Power Cables  
3. Plug the other end of the power cables into a grounded electrical outlet or a separate power source  
such as an uninterrupted power supply (UPS) or a power distribution unit (PDU).  
4. Press both power switches on the rear of the storage system chassis to turn on the storage system.  
Figure 64. Turn on the Storage System  
When the SCv2000/SCv2020 storage system is powered on, there is a delay while the storage system prepares to start up. During the first minute, the only indication that the storage system is powered on is the LEDs on the storage controllers. After the one-minute delay, the fans and LEDs turn on as an indication that the storage controllers are starting up.
5. Wait until the green diagnostic LEDs on the back of each storage controller match the pattern shown  
in Figure 65. Storage Controller Diagnostic LEDs. This pattern indicates that a storage controller has  
started up successfully.  
NOTE: It may take up to five minutes for a storage controller to completely start up.  
Figure 65. Storage Controller Diagnostic LEDs  
5
Discover and Configure the Storage  
Center  
The Discover and Configure Uninitialized SCv2000 Series Storage Centers wizard allows you to set up a  
Storage Center to make it ready for volume creation.  
Use the Dell Storage Client to discover and configure the Storage Center. After configuring a Storage  
Center, you can set up a localhost, VMware vSphere host, or VMware vCenter host using the host setup  
wizards.  
The storage system hardware must be installed and cabled before the Storage Center can be configured.  
Worksheet to Record System Information  
Use the following worksheet to record the information that is needed to install the SCv2000/SCv2020  
storage system.  
Storage Center Information  
Gather and record the following information about the Storage Center network and the administrator  
user.  
Table 5. Storage Center Network  
Service Tag: ________________
Management IPv4 address (Storage Center management address): ___ . ___ . ___ . ___
Top Controller IPv4 address (Controller 1 MGMT port): ___ . ___ . ___ . ___
Bottom Controller IPv4 address (Controller 2 MGMT port): ___ . ___ . ___ . ___
Subnet mask: ___ . ___ . ___ . ___
Gateway IPv4 address: ___ . ___ . ___ . ___
Domain name: ________________
DNS server address: ___ . ___ . ___ . ___
Secondary DNS server address: ___ . ___ . ___ . ___
Table 6. Storage Center Administrator  
Password for the default Storage Center “Admin” user: ________________
Email address of the default Storage Center “Admin” user: ________________
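The worksheet values can also be recorded electronically before running the wizard, which makes them easier to enter during configuration and to archive afterward. The following Python sketch shows one possible layout with placeholder values; it is optional and is not read by the wizard.

# Optional sketch: record the worksheet values in a structured form before
# running the wizard (all values below are placeholders).
storage_center_network = {
    "service_tag": "ABC1234",
    "management_ipv4_address": "192.168.1.10",
    "top_controller_ipv4_address": "192.168.1.11",
    "bottom_controller_ipv4_address": "192.168.1.12",
    "subnet_mask": "255.255.255.0",
    "gateway_ipv4_address": "192.168.1.1",
    "domain_name": "storage.example.com",
    "dns_server_address": "192.168.1.2",
    "secondary_dns_server_address": "192.168.1.3",
}

storage_center_administrator = {
    "admin_password": "<record securely, not in plain text>",
    "admin_email_address": "storage-admin@example.com",
}

for key, value in storage_center_network.items():
    print(f"{key}: {value}")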
iSCSI Fault Domain Information  
For a storage system with iSCSI front-end ports, gather and record network information for the iSCSI fault  
domains. This information is needed to complete the Discover and Configure Uninitialized SCv2000  
Series Storage Centers wizard.  
NOTE: For a storage system deployed with two Ethernet switches, Dell recommends setting up  
each fault domain on separate subnets.  
Table 7. iSCSI Fault Domain 1  
Target IPv4 address: ___ . ___ . ___ . ___
Subnet mask: ___ . ___ . ___ . ___
Gateway IPv4 address: ___ . ___ . ___ . ___
IPv4 address for storage controller module 1: port 1: ___ . ___ . ___ . ___
IPv4 address for storage controller module 2: port 1: ___ . ___ . ___ . ___
(Four port I/O card only) IPv4 address for storage controller module 1: port 3: ___ . ___ . ___ . ___
(Four port I/O card only) IPv4 address for storage controller module 2: port 3: ___ . ___ . ___ . ___
Table 8. iSCSI Fault Domain 2  
Target IPv4 address: ___ . ___ . ___ . ___
Subnet mask: ___ . ___ . ___ . ___
Gateway IPv4 address: ___ . ___ . ___ . ___
IPv4 address for storage controller module 1: port 2: ___ . ___ . ___ . ___
IPv4 address for storage controller module 2: port 2: ___ . ___ . ___ . ___
(Four port I/O card only) IPv4 address for storage controller module 1: port 4: ___ . ___ . ___ . ___
(Four port I/O card only) IPv4 address for storage controller module 2: port 4: ___ . ___ . ___ . ___
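For example, following the note above about keeping each fault domain on a separate subnet, a completed worksheet might resemble the sketch below. All addresses are hypothetical placeholders; substitute the values assigned by your network administrator.

# Hypothetical iSCSI addressing plan: one /24 subnet per fault domain.
# Fault Domain 1 (subnet 192.168.10.0/24)
FD1_TARGET=192.168.10.10      # Target IPv4 address
FD1_NETMASK=255.255.255.0     # Subnet mask
FD1_GATEWAY=192.168.10.1      # Gateway IPv4 address
FD1_SC1_PORT1=192.168.10.11   # storage controller module 1: port 1
FD1_SC2_PORT1=192.168.10.12   # storage controller module 2: port 1
# Fault Domain 2 (separate subnet 192.168.20.0/24)
FD2_TARGET=192.168.20.10
FD2_NETMASK=255.255.255.0
FD2_GATEWAY=192.168.20.1
FD2_SC1_PORT2=192.168.20.11   # storage controller module 1: port 2
FD2_SC2_PORT2=192.168.20.12   # storage controller module 2: port 2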
Additional Storage Center Information  
The Network Time Protocol (NTP) and Simple Mail Transfer Protocol (SMTP) server information is  
optional. The proxy server information is also optional, but it may be required to complete the Discover  
and Configure Uninitialized SCv2000 Series Storage Centers wizard.  
Table 9. NTP, SMTP, and Proxy Servers  
NTP server IPv4 address: ___ . ___ . ___ . ___
SMTP server IPv4 address: ___ . ___ . ___ . ___
Backup SMTP server IPv4 address: ___ . ___ . ___ . ___
SMTP server login ID: ________________
SMTP server password: ________________
Proxy server IPv4 address: ___ . ___ . ___ . ___
Fibre Channel Zoning Information  
For a storage system with Fibre Channel front-end ports, record the physical and virtual WWNs of the  
Fibre Channel ports in Fault Domain 1 and Fault Domain 2. This information is displayed on the Review  
Front-End page of the Discover and Configure Uninitialized SCv2000 Series Storage Centers wizard.  
Use this information to configure zoning on each Fibre Channel switch.  
Table 10. Physical WWNs in Fault Domain 1  
Physical WWN of storage controller 1: port 1: ________________
Physical WWN of storage controller 2: port 1: ________________
(Four port I/O card only) Physical WWN of storage controller 1: port 3: ________________
(Four port I/O card only) Physical WWN of storage controller 2: port 3: ________________
Table 11. Virtual WWNs in Fault Domain 1  
Virtual WWN of storage controller 1: port 1: ________________
Virtual WWN of storage controller 2: port 1: ________________
(Four port I/O card only) Virtual WWN of storage controller 1: port 3: ________________
(Four port I/O card only) Virtual WWN of storage controller 2: port 3: ________________
Table 12. Physical WWNs in Fault Domain 2  
Physical WWN of storage controller 1: port 2: ________________
Physical WWN of storage controller 2: port 2: ________________
(Four port I/O card only) Physical WWN of storage controller 1: port 4: ________________
(Four port I/O card only) Physical WWN of storage controller 2: port 4: ________________
Table 13. Virtual WWNs in Fault Domain 2  
Virtual WWN of storage controller 1: port 2: ________________
Virtual WWN of storage controller 2: port 2: ________________
(Four port I/O card only) Virtual WWN of storage controller 1: port 4: ________________
(Four port I/O card only) Virtual WWN of storage controller 2: port 4: ________________
Locating Your Service Tag  
Your storage system is identified by a unique Service Tag number.  
You can find the Service Tag number on the service luggage tag located next to the front panel display.  
The Service Tag number is also located on the back of the storage system chassis. Dell uses this  
information to route support calls to the appropriate personnel.  
Supported Operating Systems for Storage Center  
Automated Setup  
Setting up a Storage Center using the Discover and Configure Uninitialized SCv2000 Series Storage  
Centers wizard and the Host Setup wizards requires 64-bit versions of the following operating systems.
Red Hat Enterprise Linux 6 or later  
SUSE Linux Enterprise 12 or later  
Windows Server 2008 R2 or later  
Install and Use the Dell Storage Client  
You must start the Dell Storage Client as an Administrator to run the Discover and Configure Uninitialized  
SCv2000 Series Storage Centers wizard.  
1. Go to www.dell.com/support and download the Dell Enterprise Manager software.  
2. Install the Enterprise Manager Client on the host server.  
To discover and configure a Storage Center, the software must be installed on a host server that is on  
the same subnet as the storage system.  
3. To start the software on a Windows computer, double-click the Enterprise Manager Client shortcut.
To start the software on a Linux computer, execute ./Client from the var/lib/dell/bin directory
(a command sketch follows these steps).
4. Click Discover and Configure Uninitialized SCv2000 Series Storage Centers. The Discover and  
Configure Uninitialized SCv2000 Series Storage Centers wizard appears.  
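For reference, a minimal command sketch for the Linux case in step 3, assuming the client was installed to the default /var/lib/dell/bin path (adjust if your installation path differs):

cd /var/lib/dell/bin      # default install location of the Enterprise Manager Client
./Client                  # starts the Dell Storage Client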
Discover and Select an Uninitialized Storage Center  
The first page of the Discover and Configure Uninitialized SCv2000 Series Storage Centers wizard
provides a list of prerequisite actions and information required before setting up a Storage Center.  
Prerequisites  
Make sure that the Storage Center hardware is physically attached to all necessary networks and  
powered on.  
The host server, on which the Dell Storage Client software is installed, must be on the same subnet or  
VLAN as the Storage Center.  
Layer 2 multicast must be allowed on the network.  
Steps  
1. Make sure that you have the required information that is listed on the first page of the wizard. This  
information is needed to configure the Storage Center.  
2. Click Next. The Select a Storage Center to Initialize page appears and lists the uninitialized Storage  
Centers discovered by the wizard.  
NOTE: If the wizard does not discover the Storage Center that you want to initialize, perform  
one of the following actions:  
Ensure that the Storage Center is physically attached to all necessary networks and powered  
on. Then click Rediscover.  
Click Troubleshoot Storage Center Hardware Issue to learn more about reasons why the  
Storage Center is not discoverable.  
Click Manually Discover Storage Center via MAC Address to enter the MAC address for the  
Storage Center.  
3. Select the Storage Center to initialize.  
4. (Optional) Click Enable Storage Center Indicator to turn on the indicator light for the selected  
Storage Center. You can use the indicator to verify that you have selected the correct Storage  
Center.  
5. Click Next.  
6. If the Storage Center is partially configured, the Storage Center login pane appears. Enter the  
management IPv4 address and the Admin password for the Storage Center, then click Next to  
continue.  
Set System Information  
The Set System Information page allows you to enter Storage Center and controller configuration  
information to use when connecting to the Storage Center using Dell Storage Client.  
1. Enter a descriptive name for the Storage Center in the Storage Center Name field.  
2. Enter the system management IPv4 address for the Storage Center in the Management IPv4 Address  
field. The Management IPv4 Address is the IP address used to manage the Storage Center and is  
different than a controller IPv4 address.  
3. Enter an IPv4 address for the management port of each controller.  
NOTE: The controller IPv4 addresses and Management IPv4 Address must be within the same  
subnet.  
4. Enter the subnet mask of the management network in the Subnet Mask field.  
5. Enter the gateway address of the management network in the Gateway IPv4 Address field.  
6. Enter the domain name of the management network in the Domain Name field.  
7. Enter the DNS server addresses of the management network in the DNS Server and Secondary DNS  
Server fields.  
8. Click Next. The Set Administration Information page appears.  
Set Administrator Information  
The Set Administrator Information page allows you to set a new password and an email address for the  
Admin user.  
1. Enter a new password for the default Storage Center administrator user in the New Admin Password  
and Confirm Password fields.  
2. Enter the email address of the default Storage Center administrator user in the Admin Email Address  
field.  
3. Click Next.  
For a Fibre Channel or SAS storage system, the Confirm Configuration page appears.  
For an iSCSI storage system, the Configure iSCSI Fault Domains page appears.  
Configure iSCSI Fault Domains  
For a Storage Center with iSCSI front-end ports, use the Configure iSCSI Fault Domains page and the  
Fault Domain pages to enter network information for the fault domains and ports.  
1. (Optional) On the Configure iSCSI Fault Domains page, click More information about fault domains  
or How to set up an iSCSI network to learn more about these topics.  
2. Click Next.  
NOTE: If there are down iSCSI ports, a dialog box appears that allows you to unconfigure down  
iSCSI ports. Unconfiguring the down iSCSI ports will prevent unnecessary alerts.  
3. On the Configure iSCSI HBA Fault Domain 1 page, enter network information for the fault domain  
and its ports.  
NOTE: Make sure that all the IP addresses for iSCSI Fault Domain 1 are in the same subnet.  
4. Click Next.  
5. On the Configure iSCSI HBA Fault Domain 2 page, enter network information for the fault domain  
and its ports. Then click Next.  
NOTE: Make sure that all the IP addresses for iSCSI Fault Domain 2 are in the same subnet.  
6. Click Next.  
Confirm the Storage Center Configuration  
Make sure that the configuration information shown on the Confirm Configuration page is correct before  
continuing.  
1. Verify that the Storage Center settings are correct.  
For an iSCSI storage system, verify that the iSCSI Fault Domain settings are correct.  
To copy the configuration information to the clipboard so it can be pasted into a document, click  
Copy to clipboard.  
2. If the configuration information is correct, click Apply Configuration.  
If the configuration information is incorrect, click Back and enter the correct information.  
NOTE: After the Apply Configuration button is clicked, the configuration cannot be changed  
until after the Storage Center is fully configured.  
Initialize the Storage Center  
The Storage Center sets up the system using the information provided on the previous pages.  
1. The Storage Center performs system setup tasks. The Initialize Storage Center page displays the  
status of the system setup tasks.  
To learn more about the initialization process, click More information about Initialization.  
If one or more of the system setup tasks fails, click Troubleshoot Initialization Error to learn how  
to resolve the issue.  
If the Configuring Disks task fails, click View Disks to see the status of the disks detected by the  
Storage Center.  
If any of the Storage Center front-end ports are down, the Storage Center Front-End Ports  
Down dialog box appears. Select the ports that are not connected to the storage network, and  
then click OK.  
2. When all of the Storage Center setup tasks are complete, click Next.  
Review Fibre Channel Front-End Configuration  
For a Storage Center with Fibre Channel front-end ports, the Fault Domains page displays an example  
fault domain topology based on the number of controllers and type of front-end ports. The Review  
Front-End Configuration page displays information about the fault domains created by the Storage  
Center.  
1. (Optional) On the Fault Domains page, click More information about fault domains to learn more  
about fault domains.  
2. Click Next.  
3. On the Review Front-End Configuration page, make sure that the information about the fault  
domains is correct.  
4. Using the information provided on the Review Front-End Configuration page, configure Fibre  
Channel zoning to create the physical and virtual zones described in Fibre Channel Zoning.  
5. Click Next.  
Review SAS Front-End Configuration  
For a Storage Center with SAS front-end ports, the Fault Domains page displays an example fault domain  
topology based on the number of controllers and type of front-end ports. The Review Front-End  
Configuration page displays information about the fault domains created by the Storage Center.  
1. (Optional) On the Fault Domains page, click More information about fault domains to learn more  
about fault domains.  
2. Click Next.  
3. On the Review Front-End Configuration page, make sure that the information about the fault  
domains is correct.  
4. Click Next.  
Configure Time Settings  
Configure an NTP server to set the time automatically or set the time and date manually.  
1. From the Region and Time Zone drop-down menus, select the region and time zone used to set the  
time.  
2. Select Use NTP Server and enter the host name or IPv4 address of the NTP server, or select Set  
Current Time and set the time and date manually.  
3. Click Next.  
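Optionally, before entering the NTP server in the wizard, you can confirm from a Linux host on the management network that the server answers NTP queries. This is a sketch only; it assumes the ntpdate utility is installed and uses a hypothetical server address from the worksheet.

ntpdate -q 192.168.1.20      # query (but do not set) the time from the NTP server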
Configure SMTP Server Settings  
Enable SMTP email to receive information from the Storage Center about errors, warnings, and events.  
1. Place a check next to Enable SMTP Email.  
2. Enter the SMTP mail server, backup SMTP mail server, email address, and subject line information.  
3. Click Test Server to check the connectivity from the Storage Center to the SMTP server.  
4. (Optional) Place a check next to Use Authorized Login, and then enter the ID and password.  
5. Click Next.  
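Optionally, before clicking Test Server, you can confirm from a Linux host on the management network that the SMTP server is reachable on port 25. A minimal sketch, using a hypothetical server address and assuming netcat is installed:

nc -vz 192.168.1.25 25      # check basic TCP reachability of the SMTP server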
Configure Key Management Server Settings  
The Key Management Server Settings page appears if supported by the Storage Center license. Use this  
page to enter network settings and select the certificate files.  
1. Enter network settings for the server.  
2. Enter the user name and password.  
3. Select the SSL certificate files.  
4. Click Next.  
The Configure Ports page appears.  
Review the SupportAssist Data Collection and Storage  
Agreement  
The SupportAssist Data Collection and Storage page displays the text of the SupportAssist data  
agreement and allows you to accept or opt out of SupportAssist.  
1. To allow SupportAssist to collect diagnostic data and send this information to technical support,  
select By checking this box you accept the above terms.  
2. Click Next.  
3. If you did not select By checking this box you accept the above terms, the SupportAssist  
Recommended pane appears.  
Click No to return to the SupportAssist Data Collection and Storage page and accept the  
agreement.  
Click Yes to opt out of SupportAssist and proceed to the Update Storage Center page.  
Advantages and Benefits of Dell SupportAssist  
As an integral part of Dell's ability to provide best-in-class support for your Enterprise-class products, Dell
SupportAssist proactively provides the information required to diagnose support issues, enabling the most
efficient support possible and reducing the effort required from you.
A few key benefits of SupportAssist are:  
Enables proactive service requests and real-time troubleshooting  
Automatic support case creation based on event alerting  
Enables ProSupport Plus and optimizes service delivery  
Automatic health checks  
Enables remote Storage Center updates  
Dell strongly recommends enabling comprehensive support service at time of incident and proactive  
service with SupportAssist.  
Provide Contact Information  
Enter contact information for technical support to use when sending support-related communications  
using Dell SupportAssist.  
1. Enter contact information.  
2. To receive SupportAssist emails, select Yes, I would like to receive emails from Dell SupportAssist  
when issues arise, including hardware failure notifications.  
3. Select the preferred contact method, language, and available times.  
4. Enter a shipping address where replacement Storage Center components can be sent.  
5. Click Next.  
Update Storage Center  
The Storage Center attempts to contact the SupportAssist Update Server to check for updates.  
If no update is available, the Storage Center Up to Date page appears. Click Next.  
If an update is available, the current and available versions are listed.  
a. Click Install to update to the latest version.  
b. If the update fails, click Retry Update to try to update again.  
c. When the update is complete, click Next.  
If the SupportAssist Data Collection and Storage Agreement was not accepted, the Storage Center  
cannot check for updates.  
To proceed without checking for an update, click Next.
To accept the agreement and check for an update:  
a. Click Accept SupportAssist Data Collection and Storage Agreement to review the agreement.  
b. Select By checking this box you accept the above terms.  
c. Click Next. The Storage Center attempts to contact the SupportAssist Update Server to check for  
updates.  
The Setup SupportAssist Proxy Settings dialog box appears if the Storage Center cannot connect to  
the Dell SupportAssist Update server. If the site does not have direct access to the Internet but uses a  
web proxy, configure the proxy settings:  
a. Select Enabled.  
b. Enter the proxy settings.  
c. Click OK. The Storage Center attempts to contact the SupportAssist Update Server to check for  
updates.  
Complete Configuration and Perform Next Steps  
The Storage Center is now configured. The Configuration Complete page provides links to a Dell Storage  
Client tutorial and wizards to perform the next setup tasks.  
1. (Optional) Click one of the Next Steps to configure a localhost, configure a VMware host, or create a  
volume.  
When you have completed the step, you are returned to the Configuration Complete page.  
2. (Optional) Click one of the Advanced Steps to configure embedded iSCSI ports or modify BMC  
settings.  
When you have completed the step, you are returned to the Configuration Complete page.  
3. Click Finish to exit the wizard.  
Set Up a localhost or VMware Host  
After configuring a Storage Center, you can set up block level storage for a localhost, VMware vSphere  
host, or VMware vCenter.  
Set Up a localhost from Initial Setup  
Configure a localhost to access block level storage on the Storage Center.  
Prerequisites  
Client must be running on a system with a 64-bit operating system.  
The Dell Storage Client must be run by a Dell Storage Client user with the Administrator privilege.  
On a Storage Center with Fibre Channel IO ports, configure Fibre Channel zoning before starting this  
procedure.  
Steps  
1. On the Configuration Complete page of the Discover and Configure Storage Center wizard, click  
Set up block level storage for this host.  
The Set up localhost for Storage Center wizard appears.  
If the Storage Center has iSCSI ports and the host is not connected to any interface, the Log into  
Storage Center via iSCSI page appears. Select the target fault domains, and then click Log In.  
In all other cases, the Verify localhost Information page appears. Proceed to the next step.  
2. On the Verify localhost Information page, verify that the information is correct. Then click Create  
Server.  
The server definition is created on the Storage Center for the connected and partially connected  
initiators.  
3. The Host Setup Successful page displays the best practices that were set by the wizard and best  
practices that were not set. Make a note of any best practices that were not set by the wizard. It is  
recommended that these updates are applied manually before starting IO to the Storage Center.  
4. (Optional) Place a check next to Create a Volume for this host to create a volume after finishing host  
setup.  
5. Click Finish.  
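After the wizard finishes, for an iSCSI-attached Linux host you can optionally confirm the iSCSI sessions from the host itself. This sketch assumes the open-iscsi tools (iscsiadm) are installed; the portal address is a hypothetical value from the worksheet.

sudo iscsiadm -m session                                    # list active iSCSI sessions to the Storage Center
sudo iscsiadm -m discovery -t sendtargets -p 192.168.10.10  # if needed, discover targets behind a fault domain portal
sudo iscsiadm -m node --login                               # and log in to the discovered targets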
Set Up a VMware vSphere Host from Initial Setup  
Configure a VMware vSphere host to access block level storage on the Storage Center.  
Prerequisites  
Client must be running on a system with a 64-bit operating system.  
The Dell Storage Client must be run by a Dell Storage Client user with the Administrator privilege.  
On a Storage Center with Fibre Channel IO ports, configure Fibre Channel zoning before starting this  
procedure.  
About this task  
NOTE: Block level storage cannot be set up for a VMware cluster on a Storage Center with SAS
IO ports.
Steps  
1. On the Configuration Complete page of the Discover and Configure Storage Center wizard, click  
Configure VMware vSpheres to access a Storage Center.  
The Set up VMware Host on Storage Center wizard appears.  
2. Enter the IP address or hostname, the user name and password. Then click Next.  
If the Storage Center has iSCSI ports and the host is not connected to any interface, the Log into  
Storage Center via iSCSI page appears. Select the target fault domains, and then click Log In.  
In all other cases, the Verify vSpheres Information page appears. Proceed to the next step.  
3. Select an available port, and then click Create Server.  
The server definition is created on the Storage Center.  
4. The Host Setup Successful page displays the best practices that were set by the wizard and best  
practices that were not set. Make a note of any best practices that were not set by the wizard. It is  
recommended that these updates are applied manually before starting IO to the Storage Center.  
5. (Optional) Place a check next to Create a Volume for this host to create a volume after finishing host  
setup.  
6. Click Finish.  
Set Up a VMware vCenter Host from Initial Setup  
Configure a VMware vCenter cluster to access block level storage on the Storage Center.  
Prerequisites  
Client must be running on a system with a 64-bit operating system.  
The Dell Storage Client must be run by a Dell Storage Client user with the Administrator privilege.  
On a Storage Center with Fibre Channel IO ports, configure Fibre Channel zoning before starting this  
procedure.  
About this task  
NOTE: Block level storage cannot be set up for a VMware vCenter on a Storage Center with SAS IO  
ports.  
Steps  
1. On the Configuration Complete page of the Discover and Configure Storage Center wizard, click  
Configure VMware vSpheres to access a Storage Center.  
The Set up VMware Host on Storage Center wizard appears.  
2. Enter the IP address or hostname, the user name and password. Then click Next.  
If the Storage Center has iSCSI ports and the host is not connected to any interface, the Log into  
Storage Center via iSCSI page appears. Select the hosts and target fault domains, and then click  
Log In.  
In all other cases, the Verify vCenters Information page appears. Proceed to the next step.  
3. Select an available port, and then click Create Servers.  
The server definition is created on the Storage Center for each of the connected or partially  
connected hosts.  
4. The Host Setup Successful page displays the best practices that were set by the wizard and best  
practices that were not set. Make a note of any best practices that were not set by the wizard. It is  
recommended that these updates are applied manually before starting IO to the Storage Center.  
5. Click Finish.  
Configure Embedded iSCSI Ports  
Configure the embedded Ethernet ports on the Storage Center for use as iSCSI ports.  
1. Configure the fault domain and ports for iSCSI Embedded Domain 1.  
a. Enter the target IPv4 address, subnet mask, and gateway for the fault domain.  
b. Enter an IPv4 address for each port in the fault domain.  
NOTE: Make sure that all the IP addresses for iSCSI Embedded Domain 1 are in the same  
subnet.  
2. If the Flex Port license is installed, configure the fault domain and ports for Flex Port Domain 1.  
a. Enter the target IPv4 address, subnet mask, and gateway for the fault domain.  
b. Enter an IPv4 address for each port in the fault domain.  
NOTE: Make sure that all the IP addresses for Flex Port Domain 1 are in the same
subnet.
3. Click OK.  
6 Perform Post-Setup Tasks
Perform connectivity and failover tests to make sure that the Storage Center deployment was successful.  
Verify Connectivity and Failover  
This section describes the general steps needed to verify that Storage Center is set up properly and  
performs failover correctly.  
The process includes creating test volumes, copying data to verify connectivity, and shutting down a  
storage controller to verify failover and MPIO functionality.  
Create Test Volumes  
Connect a server to the Storage Center, create one or more test volumes, and map them to the server to  
prepare for connectivity and failover testing.  
About this task  
Steps  
1. Configure a localhost to access the Storage Center using the Set up localhost on Storage Center  
wizard.  
2. Connect to the Storage Center using the Dell Storage Client.  
3. Create a 25 GB test volume named TestVol1 and map it to the top storage controller (storage  
controller 1).  
a. Click the Storage tab.  
b. From the Storage tab navigation pane, select the Volumes node.  
c. Click Create Volume. The Create Volume wizard appears.  
d. Enter TestVol1 in the Name field, then click Next.
e. Set the volume size to 25 GB, then click Next.  
f. Select the disk folder to use, then click Next.  
g. Select the replay profile to use, then click Next.  
h. Select the server to which to map the volume and click Advanced Mapping.  
i. Clear the check box under Restrict Mapping Paths and select the controller on which to activate  
the volume.  
j. Click OK and click Next.  
k. Click Finish.  
4. Repeat the previous steps to create a 25 GB volume named TestVol2 and map it to the bottom storage
controller (storage controller 2).
5. Partition and format the test volumes on the server. (A Linux example is sketched below.)
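On a Linux server, a minimal partition-and-format sketch for step 5 might look like the following. The device name /dev/sdX is a hypothetical placeholder; identify the actual devices for TestVol1 and TestVol2 first (for example with lsblk), and repeat for each test volume.

lsblk                                                               # identify the newly mapped 25 GB volumes
sudo parted /dev/sdX --script mklabel gpt mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/sdX1                                            # format the new partition
sudo mkdir -p /mnt/testvol1
sudo mount /dev/sdX1 /mnt/testvol1                                  # mount it for the connectivity tests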
Test Basic Connectivity  
Verify basic connectivity by copying data to the test volumes.  
1. Connect to the server to which the volumes are mapped.  
2. Create a folder on the TestVol1 volume, copy at least 2 GB of data to the folder, and verify that the  
data copied successfully.  
3. Create a folder on the TestVol2 volume, copy at least 2 GB of data to the folder, and verify that the
data copied successfully.
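On a Linux server, one way to perform this check is sketched below. The mount points are hypothetical and should match wherever you mounted the test volumes; the checksums confirm that the copied data is intact.

mkdir -p /tmp/conn_test
dd if=/dev/urandom of=/tmp/conn_test/testfile bs=1M count=2048      # generate roughly 2 GB of test data
cp -r /tmp/conn_test /mnt/testvol1/                                 # copy to TestVol1
cp -r /tmp/conn_test /mnt/testvol2/                                 # copy to TestVol2
md5sum /tmp/conn_test/testfile /mnt/testvol1/conn_test/testfile /mnt/testvol2/conn_test/testfile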
Test Storage Controller Failover  
Test the Storage Center to make sure that a storage controller failover does not interrupt IO.  
1. Connect to the server, create a Test folder on the server, and copy at least 2 GB of data into it.  
2. Restart the top storage controller while copying data to verify that the failover event does not  
interrupt IO.  
a. Copy the Test folder to the TestVol1 volume.  
b. During the copy process, restart the top storage controller (the storage controller through which  
TestVol1 is mapped) by selecting it from the Hardware tab and clicking Shutdown/Restart  
Controller.  
c. Verify that the copy process continues while the storage controller restarts.  
d. Wait several minutes and verify that the storage controller has finished restarting.  
3. Restart the bottom storage controller while copying data to verify that the failover event does not  
interrupt IO.  
a. Copy the Test folder to the TestVol2 volume.  
b. During the copy process, restart the bottom storage controller (the storage controller through  
which the TestVol2 is mapped) by selecting it from the Hardware tab and clicking Shutdown/  
Restart Controller.  
c. Verify that the copy process continues while the storage controller restarts.  
d. Wait several minutes and verify that the storage controller has finished restarting.  
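To keep IO running while you restart each storage controller, you can use a simple write loop like the sketch below on a Linux server (the mount point is hypothetical). Any interruption shows up as a logged write failure. Press Ctrl+C to stop the loop after the test.

while true; do
  dd if=/dev/zero of=/mnt/testvol1/failover_probe bs=1M count=512 oflag=direct status=none \
    || echo "write failed at $(date)" >> /tmp/failover_errors.log
  sleep 1
done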
Test MPIO  
Perform the following tests for a Storage Center with Fibre Channel or iSCSI front-end connectivity if the  
network environment and servers are configured for MPIO. Always perform the following tests for a  
Storage Center with SAS front-end connectivity.  
1. Create a Test folder on the server and copy at least 2 GB of data into it.
2. Make sure that the server is configured to use load balancing MPIO (round-robin).  
3. Manually disconnect a path while copying data to TestVol1 to verify that MPIO is functioning  
correctly.  
a. Copy the Test folder to the TestVol1 volume.  
b. During the copy process, disconnect one of the paths and verify that the copy process continues.  
c. Reconnect the path.  
4. Repeat the previous steps as necessary to test additional paths.  
5. Restart the storage controller that contains the active path while IO is being transferred and verify  
that the IO process continues.  
6. If the front-end connectivity of the Storage Center is Fibre Channel or iSCSI and the Storage Center  
is not in a production environment, restart the switch that contains the active path while IO is being  
transferred, and verify that the IO process continues.  
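On a Linux server using device-mapper multipath, the sketch below shows one way to confirm the round-robin policy mentioned in step 2 and to watch path state while a path is disconnected and reconnected. It assumes the multipath-tools package is installed.

sudo multipath -ll               # show multipath devices, their paths, and the path selector policy
sudo watch -n 2 'multipath -ll'  # optionally watch path state change as you pull and restore a path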
Clean up Test Volumes  
After testing is complete, delete the volumes used for testing.  
1. Connect to the server to which the volumes are mapped and remove the volumes.  
2. Connect to the Storage Center using the Dell Storage Client.  
3. Click the Storage tab.  
4. From the Storage tab navigation pane, select the Volumes node.  
5. Select the volumes to delete.  
6. Right-click on the selected volumes and select Delete. The Delete dialog box appears.  
7. Click OK.
Sending Diagnostic Data Using Dell SupportAssist  
Use Dell SupportAssist to send diagnostic data to Dell Technical Support Services.  
1. Click Send SupportAssist Data Now. The Send Support Assist Data Now dialog box appears.  
2. Select Storage Center Configuration and Detailed Logs.  
3. Click OK.  
Label SC100/SC120 Expansion Enclosures  
SC100/SC120 expansion enclosures do not have displays to indicate the expansion enclosure ID assigned  
by Storage Center.  
About this task  
To facilitate easy identification in the rack, use Dell Storage Client to match each expansion enclosure ID  
to a Service Tag. Locate the Service Tag on the back of each expansion enclosure and then label it with  
the correct expansion enclosure ID.  
NOTE: If the expansion enclosure is deleted from Storage Client and then added back in, the  
expansion enclosure is assigned a new index number, requiring a label change.  
Steps  
1. Click the Hardware tab.  
2. In the Hardware tab navigation pane, select the Enclosures node.  
3. In the right pane, locate the enclosure to label and record the Service Tag.
4. Create a label with the enclosure ID.  
NOTE: If the name of the enclosure is Enclosure - 2, the enclosure ID is 2.  
5. Locate the expansion enclosure with the recorded Service Tag and apply the ID label to the left-front  
of the enclosure.  
A Adding or Removing an Expansion Enclosure
This section describes how to add an expansion enclosure to a storage system and how to remove an  
expansion enclosure from a storage system.  
Adding Multiple Expansion Enclosures to a Storage  
System Deployed without Expansion Enclosures  
Use caution when adding expansion enclosures to a live Storage Center system to preserve the integrity  
of the existing data.  
Prerequisites  
Install the expansion enclosures in a rack, but do not connect the expansion enclosures to the
storage system. For more information, see the Dell Storage Center SC100/SC120 Expansion Enclosure
Getting Started Guide.
Steps  
1. Cable the expansion enclosures together to form a chain.  
2. Connect to the Storage Center using the Dell Storage Client.  
3. Check the disk count of the Storage Center system before adding the expansion enclosure.  
4. Click the Hardware tab and select Enclosures node in the Hardware tab navigation pane.  
5. Click Add Enclosure. The Add New Enclosure wizard starts.  
a. Click Next to validate the existing cabling.  
b. Select the enclosure type and click Next.  
c. If the drives are not installed, install the drives in the expansion enclosures.  
d. Turn on the expansion enclosure. When the drives spin up, make sure that the front panel and  
power status LEDs show normal operation.  
e. Click Next.  
f. Add the expansion enclosure to the A-side chain. Click Next to validate the cabling.  
g. Add the expansion enclosure to the B-side chain. Click Next to validate the cabling.  
h. Click Finish.  
6. To manually manage new unassigned disks:  
a. Click the Storage tab.  
b. In the Storage tab navigation pane, select the Disks node.  
c. Click Manage Unassigned Disks. The Manage Unassigned Disks dialog box appears.  
d. From the Disk Folder drop-down menu, select the disk folder for the unassigned disks.  
e. Select Perform RAID rebalance immediately.  
f. Click OK.  
For more information, see the Dell Enterprise Manager Administrator’s Guide.  
7. Label the new back-end cables.  
Cable the Expansion Enclosures Together  
Cable the expansion enclosures together to form a chain, but do not connect the chain to the storage  
system.  
1. Connect a SAS cable from expansion enclosure 1: top, port B to expansion enclosure 2: top, port A.  
2. Connect a SAS cable from expansion enclosure 1: bottom, port B to expansion enclosure 2: bottom,  
port A.  
Figure 66. Cable the Expansion Enclosures Together  
1. Expansion enclosure 1  
2. Expansion enclosure 2  
3. Repeat the previous steps to connect additional expansion enclosures to the chain.  
Check the Current Disk Count before Adding Expansion Enclosures  
Use the Dell Storage Client to determine the number of drives that are currently accessible to the Storage  
Center.  
1. Connect to the Storage Center using the Dell Storage Client.  
2. Select the Storage tab.  
3. In the Storage tab navigation pane, select the Disks node.  
4. On the Disks tab, record the number of drives that are accessible by Storage Center.  
Compare this value to the number of drives accessible by the Storage Center after adding expansion  
enclosures to the storage system.  
Add the Expansion Enclosures to the A-Side Chain  
Connect the expansion enclosures to one chain at a time to maintain drive availability.  
1. Disconnect the A-side chain that interconnects the storage controllers.  
Remove the SAS cable that connects storage controller 1: port A to storage controller 2: port B.  
Figure 67. Remove the A-Side Cable from the Storage Controllers  
1. Storage system  
2. Storage controller 1  
3. Storage controller 2  
2. Add the expansion enclosures to the A-side chain.  
a. Connect a SAS cable from storage controller 1: port A to the first expansion enclosure in the  
chain, top EMM, port A.  
b. Connect a SAS cable from storage controller 2: port B to the last expansion enclosure in the  
chain, top EMM, port B.  
Figure 68. Connect the A-Side Cables to the Expansion Enclosures  
1. Storage system  
2. Storage controller 1  
3. Storage controller 2  
4. Expansion enclosure 1
5. Expansion enclosure 2
Add the Expansion Enclosures to the B-Side Chain  
Connect the expansion enclosures to one chain at a time to maintain drive availability.  
1. Disconnect the B-side chain that interconnects the storage controllers.  
Remove the SAS cable that connects storage controller 1: port B to storage controller 2: port A.  
Figure 69. Remove the B-Side Cable from the Storage Controllers  
1. Storage system  
2. Storage controller 1  
3. Storage controller 2  
4. Expansion enclosure 1  
5. Expansion enclosure 2  
2. Add the expansion enclosures to the B-side chain.  
a. Connect a SAS cable from storage controller 1: port B to expansion enclosure 2: bottom EMM,  
port B.  
b. Connect a SAS cable from storage controller 2: port A to expansion enclosure 1: bottom EMM,  
port A.  
Figure 70. Connect the B-Side Cables to the Expansion Enclosures  
1. Storage system  
2. Storage controller 1  
3. Storage controller 2  
4. Expansion enclosure 1  
5. Expansion enclosure 2  
Label the Back-End Cables  
Label the back-end cables that interconnect the storage controllers or label the back-end cables that  
connect the storage system to the expansion enclosures.  
Prerequisites  
Locate the pre-made cable labels provided with the expansion enclosures.  
About this task  
Apply cable labels to both ends of each SAS cable to indicate the chain number and side (A or B).  
Steps  
1. Starting with the top edge of the label, attach the label to the cable near the connector.  
Figure 71. Attach Label to Cable  
2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so  
that it does not obscure the text.  
Figure 72. Wrap Label Around Cable  
3. Apply a matching label to the other end of the cable.  
Adding a Single Expansion Enclosure to a Chain Currently  
in Service  
Use caution when adding an expansion enclosure to a live Storage Center system to preserve the  
integrity of the existing data.  
Prerequisites  
Install the expansion enclosure in a rack, but do not connect the expansion enclosure to the storage  
system. For more information, see the Dell Storage Center SC100/SC120 Expansion Enclosure Getting  
Started Guide.  
About this task  
To add an expansion enclosure to an existing chain, connect the expansion enclosure to the end of the  
chain.  
Steps  
1. Connect to the Storage Center using the Dell Storage Client.  
2. Check the disk count of the Storage Center system before adding the expansion enclosure.  
3. Click the Hardware tab and select Enclosures in the Hardware tab navigation pane.  
4. Click Add Enclosure. The Add New Enclosure wizard starts.  
a. Confirm the details of your current install and click Next to validate the existing cabling.  
b. Select the enclosure type and click Next.  
c. If the drives are not installed, install the drives in the expansion enclosure.  
d. Turn on the expansion enclosure. When the drives spin up, make sure that the front panel and  
power status LEDs show normal operation.  
e. Click Next.  
f. Add the expansion enclosure to the A-side chain. Click Next to validate the cabling.  
g. Add the expansion enclosure to the B-side chain. Click Next to validate the cabling.  
h. Click Finish.  
5. To manually manage new unassigned disks:  
a. Click the Storage tab.  
b. In the Storage tab navigation pane, select the Disks node.  
c. Click Manage Unassigned Disks. The Manage Unassigned Disks dialog box appears.  
d. From the Disk Folder drop-down menu, select the disk folder for the unassigned disks.  
e. Select Perform RAID rebalance immediately.  
f. Click OK.  
For more information, see the Dell Enterprise Manager Administrator’s Guide.  
6. Label the new back-end cables.  
Check the Disk Count before Adding an Expansion Enclosure  
Use the Dell Storage Client to determine the number of drives that are currently accessible to the Storage  
Center.  
1. Connect to the Storage Center using the Dell Storage Client.  
2. Select the Storage tab.  
3. In the Storage tab navigation pane, select the Disks node.  
4. On the Disks tab, record the number of drives that are accessible by Storage Center.  
Compare this value to the number of drives accessible by the Storage Center after adding an  
expansion enclosure to the storage system.  
Add an Expansion Enclosure to the A-side Chain  
Connect the expansion enclosure to one chain at a time to maintain drive availability.
1. Turn on the expansion enclosure being added. When the drives spin up, make sure that the front  
panel and power status LEDs show normal operation.  
2. Disconnect the A-side cables (shown in orange) from the storage controllers. The storage system IO  
continues through the B-side cables.  
Disconnect the SAS cable from storage controller 1: port A.
Disconnect the SAS cable from storage controller 2: port B.
Figure 73. Disconnect A-Side Cables from the Storage Controllers  
1. Storage system  
2. Storage controller 1  
3. Storage controller 2
4. Expansion enclosure 1
3. Move the cable from expansion enclosure 1: top EMM, port B to the new expansion enclosure (2):  
top EMM, port B.  
4. Use a new SAS cable to connect expansion enclosure 1: top EMM, port B to the new expansion  
enclosure (2): top EMM, port A.  
Figure 74. Connect A-Side Cables to the New Expansion Enclosure  
1. Storage system  
2. Storage controller 1  
3. Storage controller 2  
4. Expansion enclosure 1
5. New expansion enclosure (2)
5. Reconnect the A-side cables to the storage controllers:  
a. Reconnect expansion enclosure 1: top EMM, port A to storage controller 1: port A.  
b. Connect the new expansion enclosure (2): top EMM, port B to storage controller 2: port B.  
Figure 75. Reconnect A-side Cables to the Storage Controllers  
1. Storage system  
2. Storage controller 1  
3. Storage controller 2  
4. Expansion enclosure 1
5. New expansion enclosure (2)
Add an Expansion Enclosure to the B-side Chain  
Connect the expansion enclosure to one chain at a time to maintain drive availability.  
1. Disconnect the B-side cables (shown in blue) from the storage controllers. The storage system IO  
continues through the A-side cables.  
Disconnect the SAS cable from storage controller 1: port B.  
Disconnect the SAS cable from storage controller 2: port A.
Figure 76. Disconnect B-Side Cables from the Storage Controllers  
1. Storage system  
2. Storage controller 1  
3. Storage controller 2
4. Expansion enclosure 1
5. New expansion enclosure (2)  
2. Move the cable from expansion enclosure 1: bottom EMM, port B to the new expansion enclosure  
(2): bottom EMM, port B.  
3. Use a new SAS cable to connect expansion enclosure 1: bottom EMM, port B to the new expansion  
enclosure (2): bottom EMM, port A.  
Figure 77. Connect B-Side Cables to the New Expansion Enclosure  
1. Storage system  
2. Storage controller 1  
3. Storage controller 2  
4. Expansion enclosure 1  
5. New expansion enclosure (2)  
4. Reconnect the B-side cables to the storage controllers:
a. Reconnect expansion enclosure 1: bottom EMM, port A to storage controller 2: port A.  
b. Connect the new expansion enclosure (2): bottom EMM, port B to storage controller 1: port B.  
Figure 78. Reconnect B-Side Cables to the Storage Controllers  
1. Storage system  
2. Storage controller 1  
3. Storage controller 2  
4. Expansion enclosure 1
5. New expansion enclosure (2)
Label the Back-End Cables  
Label the back-end cables that interconnect the storage controllers or label the back-end cables that  
connect the storage system to the expansion enclosures.  
Prerequisites  
Locate the pre-made cable labels provided with the expansion enclosures.  
About this task  
Apply cable labels to both ends of each SAS cable to indicate the chain number and side (A or B).  
Steps  
1. Starting with the top edge of the label, attach the label to the cable near the connector.  
Figure 79. Attach Label to Cable  
2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so  
that it does not obscure the text.  
Figure 80. Wrap Label Around Cable  
3. Apply a matching label to the other end of the cable.  
Removing an Expansion Enclosure from a Chain  
Currently in Service  
To remove an expansion enclosure, disconnect the expansion enclosure from one side of the chain at a  
time.  
About this task  
During this process, one side of the chain is disconnected, and the Storage Center directs all IO to the
other side of the chain, which remains connected.
CAUTION: Make sure that your data is backed up before removing an expansion enclosure.  
Steps  
1. Connect to the Storage Center using the Dell Storage Client.  
2. Use the Dell Storage Client to release the disks in the expansion enclosure.  
3. Select the expansion enclosure to remove and click Remove Enclosure. The Remove Enclosure  
wizard starts.  
4. Confirm the details of your current install and click Next to validate the cabling.  
5. Locate the expansion enclosure in the rack. Click Next.  
6. Disconnect the A-side chain.  
a. Disconnect the A-side cables that connect the expansion enclosure to the storage system. Click  
Next.  
b. Reconnect the A side cables to exclude the expansion enclosure from the chain. Click Next to  
validate the cabling.  
7. Disconnect the B-side chain.  
a. Disconnect the B-side cables that connect the expansion enclosure to the storage system. Click  
Next.
b. Reconnect the B-side cables to exclude the expansion enclosure from the chain. Click Next to  
validate the cabling.  
8. Click Finish.  
Release the Disks in the Expansion Enclosure  
Use the Dell Storage Client to release the disks in an expansion enclosure before removing the expansion  
enclosure.  
About this task  
Releasing disks causes all of the data to move off the disks.  
NOTE: Do not release disks unless the remaining disks have enough free space for the re-striped  
data.  
Steps  
1. Connect to the Storage Center using the Dell Storage Client.  
2. Click the Hardware tab.  
3. In the Hardware tab navigation pane, expand the enclosure to remove.  
4. Select the Disks node.  
5. Select all of the disks in the expansion enclosure.  
6. Right-click on the selected disks and select Release Disk. The Release Disk dialog box appears.  
7. Select Perform RAID rebalance immediately.  
8. Click OK.  
When all of the drives in the expansion enclosure are in the Unassigned disk folder, the expansion  
enclosure is safe to remove.  
Disconnect the A-Side Chain from the SC100/SC120 Expansion Enclosure  
Disconnect the A-side chain from the expansion enclosure that you want to remove.  
About this task  
CAUTION: To disconnect the A-side chain without a system outage, disconnect the A-side cable  
from storage controller 1 first. Disconnecting a different cable in the chain may disrupt IO to the  
expansion enclosure, resulting in a system outage.  
Steps  
1. Disconnect the A-side cable (shown in orange) between storage controller 1: port A and expansion  
enclosure 1: top EMM, port A.  
2. In the Dell Storage Client, verify that the B-side ports on both storage controllers are Up. The B-side  
chain continues to carry IO while the A-side chain is disconnected.  
3. Remove the A-side cable between expansion enclosure 1: top EMM, port B and expansion enclosure  
2: top EMM, port A.  
Figure 81. Disconnecting the SC100/SC120 Expansion Enclosure from the A-side Chain  
1. Storage system  
2. Storage controller 1  
3. Storage controller 2  
4. Expansion enclosure 1  
5. Expansion enclosure 2  
4. Reconnect the A-side cable to storage controller 1: port A.  
5. Connect the other end of the A-side cable to expansion enclosure 2: top EMM, port A.  
Figure 82. Reconnecting the A-side Chain  
1. Storage system  
2. Storage controller 1  
3. Storage controller 2  
4. Expansion enclosure 1  
5. Expansion enclosure 2  
Disconnect the B-Side Chain from the SC100/SC120 Expansion Enclosure  
Disconnect the B-side chain from the expansion enclosure that you want to remove.  
About this task  
CAUTION: To disconnect the B-side chain without a system outage, disconnect the B-side cable  
from storage controller 2 first. Disconnecting a different cable in the chain may disrupt IO to the  
expansion enclosure, resulting in a system outage.  
Steps  
1. Disconnect the B-side cable (shown in blue) between storage controller 2: port A and expansion  
enclosure 1: bottom EMM, port A.  
2. In the Dell Storage Client, verify that the A-side ports on both storage controllers are Up. The A-side  
chain continues to carry IO while the B-side chain is disconnected.  
3. Remove the B-side cable between expansion enclosure 1: bottom EMM, port B and expansion  
enclosure 2: bottom EMM, port A.  
Figure 83. Disconnecting the SC100/SC120 Expansion Enclosure from the B-side Chain  
1. Storage system  
2. Storage controller 1  
3. Storage controller 2  
4. Expansion enclosure 1  
5. Expansion enclosure 2  
4. Reconnect the B-side cable to storage controller 2: port A.  
5. Connect the other end of the B-side cable to expansion enclosure 2: bottom EMM, port A.
The expansion enclosure is now disconnected and can be removed.  
Figure 84. Expansion Enclosure Disconnected  
1. Storage system  
2. Storage controller 1  
3. Storage controller 2
4. Disconnected expansion enclosure
5. Expansion enclosure 1
B Troubleshooting Storage Center
This appendix contains troubleshooting steps for common Storage Center issues.  
Troubleshooting Storage Controllers  
Use these steps to troubleshoot storage controllers.  
1. Check the status of the storage controller using the Dell Storage Client.  
2. Check the pins and reseat the storage controller.  
a. Remove the storage controller.  
b. Verify that the pins on the storage system backplane and the storage controller are not bent.  
c. Reinstall the storage controller.  
3. Determine the status of the storage controller link status indicators. If the indicators are not green,  
check the cables.  
a. Shut down the storage controller.  
b. Reseat the cables on the storage controller.  
c. Restart the storage controller.  
d. Recheck the link status indicators. If the link status indicators are not green, replace the cables.  
Troubleshooting Hard Drives  
Use these steps to troubleshoot hard drives.  
1. Check the status of the hard drive using the Dell Storage Client.  
2. Determine the status of the hard drive indicators.  
If the hard drive status indicator blinks amber on 2 seconds / off 1 second, the hard drive has  
failed.  
If the hard drive status indicator is not lit, proceed to the next step.  
3. Check the connectors and reseat the hard drive.  
a. Remove the hard drive.  
b. Check the hard drive and the backplane to ensure that the connectors are not damaged.  
c. Reinstall the hard drive. Make sure the hard drive makes contact with the backplane.  
Troubleshooting Expansion Enclosures  
Use these steps to troubleshoot expansion enclosures.  
1. Check the status of the expansion enclosure using the Dell Storage Client.  
2. If an expansion enclosure and/or drives are missing in the Dell Storage Client, you may need to  
check for and install Storage Center updates to use the expansion enclosure and/or drives.  
3. If an expansion enclosure firmware update fails, check the back-end cabling and ensure that  
redundant connections are used.  