HP Integrity NonStop BladeSystem Planning Guide
HP Part Number: 545740-002  
Published: May 2008  
Edition: J06.03 and subsequent J-series RVUs  
About This Document  
This guide describes the HP Integrity NonStop™ BladeSystem and provides examples of system  
configurations to assist you in planning for installation of a new HP Integrity NonStop™ NB50000c  
BladeSystem.  
Supported Release Version Updates (RVUs)  
This publication supports J06.03 and all subsequent J-series RVUs until otherwise indicated in  
a replacement publication.  
Intended Audience  
This guide is written for those responsible for planning the installation, configuration, and  
maintenance of a NonStop BladeSystem and the software environment at a particular site.  
Appropriate personnel must have completed HP training courses on system support for NonStop  
BladeSystems.  
New and Changed Information in This Edition  
This is a new manual.  
Document Organization  
Chapter 1: Provides an overview of the Integrity NonStop NB50000c BladeSystem.
Chapter 2: Outlines topics to consider when planning or upgrading the installation site.
Chapter 3: Provides the installation specifications for a fully populated NonStop BladeSystem enclosure.
Chapter 4: Describes the guidelines for implementing the NonStop BladeSystem.
Chapter 5: Shows recommended locations for hardware enclosures in the NonStop BladeSystem.
Chapter 6: Describes the connectivity options, including ISEE, for maintenance and support of a NonStop BladeSystem.
Appendix A: Identifies the cables used with the NonStop BladeSystem hardware.
Appendix B: Describes how to use the OSM applications to manage a NonStop BladeSystem.
Appendix C: Describes the default startup characteristics for a NonStop BladeSystem.
Notation Conventions  
General Syntax Notation  
This list summarizes the notation conventions for syntax presentation in this manual.  
UPPERCASE LETTERS  
Uppercase letters indicate keywords and reserved words. Type these  
items exactly as shown. Items not enclosed in brackets are required.  
For example:  
MAXATTACH  
Italic Letters  
Italic letters, regardless of font, indicate variable items that you  
supply. Items not enclosed in brackets are required. For example:  
file-name  
Computer Type  
Computer type letters indicate:  
C and Open System Services (OSS) keywords, commands, and  
reserved words. Type these items exactly as shown. Items not  
enclosed in brackets are required. For example:  
Use the cextdecs.h header file.
Text displayed by the computer. For example:  
Last Logon: 14 May 2006, 08:02:23  
A listing of computer code. For example:
if (listen(sock, 1) < 0)  
{
perror("Listen Error");  
exit(-1);  
}
Bold Text  
Bold text in an example indicates user input typed at the terminal.  
For example:  
ENTER RUN CODE  
?123  
CODE RECEIVED:  
123.00  
The user must press the Return key after typing the input.  
[ ] Brackets
Brackets enclose optional syntax items. For example:
TERM [\system-name.]$terminal-name
INT[ERRUPTS]
A group of items enclosed in brackets is a list from which you can  
choose one item or none. The items in the list can be arranged either  
vertically, with aligned brackets on each side of the list, or  
horizontally, enclosed in a pair of brackets and separated by vertical  
lines. For example:  
FC [ num ]  
[ -num ]  
[ text ]  
K [ X | D ] address  
{ } Braces  
A group of items enclosed in braces is a list from which you are  
required to choose one item. The items in the list can be arranged  
either vertically, with aligned braces on each side of the list, or  
horizontally, enclosed in a pair of braces and separated by vertical  
lines. For example:  
LISTOPENS PROCESS { $appl-mgr-name }  
{ $process-name }  
ALLOWSU { ON | OFF }  
| Vertical Line
A vertical line separates alternatives in a horizontal list that is enclosed
in brackets or braces. For example:
INSPECT { OFF | ON | SAVEABEND }
… Ellipsis
An ellipsis immediately following a pair of brackets or braces indicates
that you can repeat the enclosed sequence of syntax items any number
of times. For example:
M address [ , new-value ]…
[ - ] {0|1|2|3|4|5|6|7|8|9}…
An ellipsis immediately following a single syntax item indicates that
you can repeat that syntax item any number of times. For example:
"s-char"…
Punctuation  
Parentheses, commas, semicolons, and other symbols not previously  
described must be typed as shown. For example:  
error := NEXTFILENAME ( file-name ) ;  
LISTOPENS SU $process-name.#su-name  
Quotation marks around a symbol such as a bracket or brace indicate  
the symbol is a required character that you must type as shown. For  
example:  
"[" repetition-constant-list "]"  
Item Spacing  
Spaces shown between items are required unless one of the items is  
a punctuation symbol such as a parenthesis or a comma. For example:  
CALL STEPMOM ( process-id ) ;  
If there is no space between two items, spaces are not permitted. In  
this example, no spaces are permitted between the period and any  
other items:  
$process-name.#su-name  
Line Spacing  
If the syntax of a command is too long to fit on a single line, each  
continuation line is indented three spaces and is separated from the  
preceding line by a blank line. This spacing distinguishes items in a  
continuation line from items in a vertical list of selections. For  
example:  
ALTER [ / OUT file-spec / ] LINE  
[ , attribute-spec ]…  
Publishing History  
Part Number: 545740-002
Product Version: N.A.
Publication Date: May 2008
HP Encourages Your Comments  
HP encourages your comments concerning this document. We are committed to providing  
documentation that meets your needs. Send any errors found, suggestions for improvement, or  
compliments to:  
Include the document title, part number, and any comment, error found, or suggestion for  
improvement you have concerning this document.  
1 NonStop BladeSystem Overview  
NOTE: This document describes products and features that are not yet available on systems  
running J-series RVUs. These products and features include:  
CLuster I/O Modules (CLIMs)  
The Cluster I/O Protocols (CIP) subsystem  
Serial attached SCSI (SAS) disk drives and their enclosures  
The Integrity NonStop BladeSystem provides an integrated infrastructure with consolidated  
server, network, storage, power, and management capabilities. The NonStop BladeSystem  
implements the BladeSystem c-Class architecture and is optimized for enterprise data center  
applications. The NonStop NB50000c BladeSystem is introduced as part of the J06.03 RVU.  
NonStop NB50000c BladeSystem  
The NonStop NB50000c BladeSystem combines the NonStop operating system and HP Integrity
NonStop BL860c Server Blades in a single footprint as part of the “NonStop Multicore Architecture
(NSMA)” described later in this section.
The characteristics of an Integrity NonStop NB50000c BladeSystem are:  
Processor: Intel Itanium
Processor model: NSE-M
Chassis: c7000 enclosure (one enclosure for 2 to 8 processors; two enclosures for 10 to 16 processors)
Cabinet: 42U, 19 inch rack
Minimum/maximum main memory per logical processor: 8 GB to 48 GB
Minimum/maximum processors: 2 to 16
Supported processor configurations: 2, 4, 6, 8, 10, 12, 14, or 16
Maximum CLuster I/O Modules (CLIMs) in a NonStop BladeSystem with 16 processors: 24 CLIMs (IP and Storage)
Minimum CLIMs: 0 CLIMs (if there are IOAM enclosures); 2 Storage CLIMs and 2 IP CLIMs (if there are no IOAM enclosures)
Maximum SAS disk enclosures per Storage CLIM pair: 4
Maximum SAS disk drives per Storage CLIM pair: 100
Maximum Fibre Channel disk modules (FCDMs) through IOAM enclosure: 4 FCDMs daisy-chained with 14 disk drives in each FCDM
Maximum IOAM enclosures (1): 6 IOAMs for 10 to 16 processors; 4 IOAMs for 2 to 8 processors
Enterprise Storage System (ESS) support available through Storage CLIMs or IOAM enclosures: Supported
Connection to NonStop ServerNet Clusters: Supported
M8201R Fibre Channel to SCSI router support: Not supported
Connection to NonStop S-series I/O: Not supported
(1) When CLIMs are also included in the configuration, the maximum number of IOAMs might be smaller. Check with
your HP representative to determine your system's maximum for IOAMs.
Figure 1-1 shows a NonStop NB50000c BladeSystem with eight server blades in a 42U modular
cabinet with the optional HP R12000/3 UPS and the HP AF434A extended runtime module (ERM).
Figure 1-1 Example of a NonStop NB50000c BladeSystem  
NonStop Multicore Architecture (NSMA)  
The NonStop BladeSystem employs the HP NonStop Multicore Architecture (NSMA) to achieve  
full software fault tolerance by running the NonStop operating system on NonStop Server Blades.  
With the NSMA's multiple core microprocessor architecture, a set of cores comprised of instruction  
processing units (IPUs) share the same memory map (except in low-level software). The NSMA  
extends the traditional NonStop logical processor to a multiprocessor and includes:  
No hardware lockstep checking  
Itanium fault detection  
High-end scalability  
Application virtualization  
Cluster programming transparency  
The NonStop NB50000c BladeSystem can be configured with 2 to 16 processors, communicates  
with other NonStop BladeSystems using Expand, and achieves ServerNet connectivity through a
ServerNet mezzanine PCI Express (PCIe) interface card installed in the server blade.
NonStop NB50000c BladeSystem Hardware  
A large number of enclosure combinations is possible within the modular cabinets of a NonStop  
NB50000c BladeSystem. The applications and purpose of any NonStop BladeSystem determine  
the number and combinations of hardware within the cabinet.  
Standard hardware for a NonStop BladeSystem includes the c7000 enclosure with NonStop Server
Blades, IP and Storage CLIMs or IOAM enclosures, SAS disk enclosures or Fibre Channel disk
modules, a maintenance switch, and a system console, each described in the subsections that follow.
Optional hardware for a NonStop BladeSystem includes a UPS and ERMs, an Enterprise Storage
System (ESS), and tape drives with their interface hardware.
All NonStop BladeSystem components are field-replaceable units that can only be serviced by  
service providers trained by HP.  
Because of the number of possible configurations, you can calculate the total power consumption,  
heat dissipation, and weight of each modular cabinet based on the hardware configuration that  
you order from HP. For site preparation specifications for the modular cabinets and the individual  
enclosures, see Chapter 3 (page 37).  
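Because this calculation is repeated for every cabinet, it can help to organize it as a small worksheet. The following Python sketch is illustrative only; the enclosure names and the per-enclosure watts and weights are hypothetical placeholders, and the real values must come from the specification tables in Chapter 3 for the hardware you actually order.

BTU_HR_PER_WATT = 3.413   # 1 watt of load dissipates approximately 3.413 BTU/hr of heat

ENCLOSURE_SPECS = {
    # enclosure name: (power in watts, weight in pounds) -- placeholder values only
    "c7000 enclosure, fully populated": (5800, 480),
    "Storage CLIM": (350, 60),
    "SAS disk enclosure": (450, 90),
}

def cabinet_totals(contents):
    """Sum power, heat, and weight for a list of (enclosure name, quantity) pairs."""
    watts = weight = 0.0
    for name, qty in contents:
        enclosure_watts, enclosure_weight = ENCLOSURE_SPECS[name]
        watts += enclosure_watts * qty
        weight += enclosure_weight * qty
    return watts, watts * BTU_HR_PER_WATT, weight

# Example cabinet: one c7000 enclosure, two Storage CLIMs, two SAS disk enclosures
watts, heat, weight = cabinet_totals([
    ("c7000 enclosure, fully populated", 1),
    ("Storage CLIM", 2),
    ("SAS disk enclosure", 2),
])
print(f"Cabinet load: {watts:.0f} W, {heat:.0f} BTU/hr, {weight:.0f} lb")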
c7000 Enclosure  
The three-phase c7000 enclosure provides integrated processing, power, and cooling capabilities  
along with connections to the I/O infrastructure. The c7000 enclosure features include:  
Up to 8 NonStop Server Blades per c7000 enclosure – populated in pairs  
Two Onboard Administrator (OA) management modules that provide detection,  
identification, management, and control services for the NonStop BladeSystem.  
The HP Insight Display provides information about the health and operation of the enclosure.  
For more information about the HP Insight Display, which is the visual interface located at  
the bottom front of the OA, see the HP BladeSystem Onboard Administrator User Guide.  
Two Interconnect Ethernet switches that download Halted State Services (HSS) bootcode  
via the maintenance LAN.  
Two ServerNet switches that provide ServerNet connectivity between processors, between  
processors and I/O, and between systems (through connections to cluster switches). There  
are two types of ServerNet switches: Standard I/O or High I/O.  
Six power supplies that implement Dynamic Power Saving Mode. This mode is enabled by  
the OA module, and when enabled, monitors the total power consumed by the c7000  
enclosure in real-time and automatically adjusts to changes in power demand.  
Ten Active Cool fans use the parallel, redundant, scalable, enclosure-based cooling (PARSEC)  
architecture where fresh, cool air flows over all the blades (in the front of the enclosure) and  
all the interconnect modules (in the back of the enclosure).  
Figure 1-2 shows all of these c7000 features, except the HP Insight Display:  
Figure 1-2 c7000 Enclosure Features  
For information about the LEDs associated with the c7000 enclosure components, see the HP  
BladeSystem c7000 Enclosure Setup and Installation Guide.  
NonStop Server Blade  
The NonStop BL860c Server Blade is a two-socket, full-height server blade featuring an Intel®
Itanium® dual-core processor. Each server blade contains a ServerNet interface mezzanine card
with PCI Express x4 to PCI-X bridge connections to provide ServerNet fabric connectivity. Other
features include four integrated Gigabit Ethernet ports for redundant network boot paths and  
12 DIMM slots providing a maximum of 48 GB of memory per server blade.  
IP CLuster I/O Module (CLIM)  
The IP CLIM is a rack-mounted server that is part of some NonStop BladeSystem configurations.  
The IP CLIM functions as a ServerNet Ethernet adapter providing HP standard Gigabit Ethernet  
Network Interface Cards (NICs) to implement one of the IP CLIM configurations (either IP CLIM  
A or IP CLIM B):  
IP CLIM A Configuration (5 Copper Ports)  
Slot 1 contains a NIC that provides four copper Ethernet ports  
Eth01 port (between slots 1 and 2) provides one copper Ethernet port  
Slot 3 contains a ServerNet PCIe interface card, which provides the ServerNet fabric  
connections  
IP CLIM B Configuration (3 Copper/2 Fiber Ports)  
Slot 1 contains a NIC that provides three copper Ethernet ports  
Slot 2 contains a NIC that provides one fiber-optical Ethernet port
Slot 3 contains a ServerNet interface PCIe card, which provides the ServerNet fabric  
connections  
Slot 4 contains a NIC that provides one fiber-optical Ethernet port
For an illustration of the IP CLIM slots, see “Ethernet to Networks” (page 70).  
NOTE: Both the IP and Storage CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more  
information about the CIP subsystem, see the Cluster I/O Protocols Configuration and Management  
Manual.  
Storage CLuster I/O Module (CLIM)  
The Storage CLuster I/O Module (CLIM) is part of some NonStop BladeSystem configurations.  
The Storage CLIM is a rack-mounted server and functions as a ServerNet I/O adapter providing:  
Dual ServerNet fabric connections  
A Serial Attached SCSI (SAS) interface for the storage subsystem via a SAS Host Bus Adapter  
(HBA) supporting SAS disk drives and SAS tapes  
A Fibre Channel (FC) interface for ESS and FC tape devices via a customer-ordered FC HBA.  
A Storage CLIM can have 0, 2, or 4 FC ports.  
The Storage CLIM contains 5 PCIe HBA slots with these characteristics:  
Slot 5 (part of base configuration): One SAS external and internal connector, with four SAS links per connector and 3 Gbps per link, provided by the PCIe 8x slot.
Slot 4 (part of base configuration): One SAS external connector, with four SAS links per connector and 3 Gbps per link, provided by the PCIe 8x slot.
Slot 3 (part of base configuration): ServerNet fabric connections via a PCIe 4x adapter.
Slot 2 (optional customer order): SAS or Fibre Channel.
Slot 1 (optional customer order): SAS or Fibre Channel.
Connections to FCDMs are not supported.
For an illustration of the Storage CLIM HBA slots, see “Storage CLIM Devices” (page 57).  
SAS Disk Enclosure  
The SAS disk enclosure is a rack-mounted disk enclosure and is part of some NonStop  
BladeSystem configurations. The SAS disk enclosure supports up to 25 SAS disk drives, 3Gbps  
SAS protocol, and a dual SAS domain from Storage CLIMs to dual port SAS disk drives. The  
SAS disk enclosure supports connections to SAS disk drives. Connections to FCDMs are not  
supported. For more information about the SAS disk enclosure, see the manual for your SAS  
disk enclosure model (for example, the HP StorageWorks 70 Modular Smart Array Enclosure  
Maintenance and Service Guide).  
The SAS disk enclosure contains:  
25 2.5-inch disk drive slots with these size options:
72GB, 15K rpm  
146GB, 10K rpm  
Two independent I/O modules:  
SAS Domain A  
SAS Domain B  
Two fans  
Two power supplies  
IOAM Enclosure  
The IOAM enclosure is part of some NonStop BladeSystem configurations. The IOAM enclosure  
uses Gigabit Ethernet 4-port ServerNet adapters (G4SAs) for networking connectivity and Fibre  
Channel ServerNet adapters (FCSAs) for Fibre Channel connectivity between the system and  
Fibre Channel disk modules (FCDMs), ESS, and Fibre Channel tape.  
Fibre Channel Disk Module (FCDM)  
The Fibre Channel disk module (FCDM) is a rack-mounted enclosure that can only be used with  
NonStop BladeSystems that have IOAM enclosures. The FCDM connects to an FCSA in an
IOAM enclosure and contains:  
Up to 14 Fibre Channel arbitrated loop disk drives (enclosure front)  
Environmental monitoring unit (EMU) (enclosure rear)  
Two fans and two power supplies  
Fibre Channel arbitrated loop (FC-AL) modules (enclosure rear)  
You can daisy-chain together up to four FCDMs with 14 drives in each one.  
Maintenance Switch  
The HP ProCurve 2524 maintenance switch provides communication between the NonStop
BladeSystem components (the Onboard Administrator, the c7000 enclosure interconnect Ethernet
switches, the Storage and IP CLIMs, the IOAM enclosures, and the optional UPS) and the system
console running HP NonStop Open System Management (OSM). For a general description of the maintenance switch,
refer to the NonStop NS14000 Planning Guide. Details about the use or implementation of the  
maintenance switch that are specific to a NonStop BladeSystem are presented here.  
The NonStop BladeSystem requires multiple connections to the maintenance switch. The following
lists describe the required connections for each hardware component; a port-count sketch follows the lists.
BladeSystem Connections to Maintenance Switch  
One connection per Onboard Administrator on the NonStop BladeSystem  
One connection per Interconnect Ethernet switch on the NonStop BladeSystem  
One connection to the optional UPS module  
One connection for the system console running OSM  
CLIM Connections to Maintenance Switch  
One connection to the iLO port on a CLIM  
One connection to an eth0 port on a CLIM  
IOAM Enclosure Connections to Maintenance Switch  
One connection to each of the two ServerNet switch boards in one I/O adapter module  
(IOAM) enclosure.  
At least two connections to any two Gigabit Ethernet 4-port ServerNet adapters (G4SAs), if  
the NonStop BladeSystem maintenance LAN is implemented through G4SAs.  
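One rough way to size the maintenance switch is to add up the connections listed above. The Python sketch below is a hypothetical illustration: the per-component connection counts follow the lists in this subsection, but the example quantities are not a supported configuration and do not replace planning with your HP representative.

def maintenance_switch_ports(c7000_enclosures, clims, ioam_enclosures,
                             g4sa_lan_connections=0, ups_modules=0, consoles=1):
    """Count the maintenance switch ports implied by the connection rules above."""
    ports = 0
    ports += 2 * c7000_enclosures     # one per Onboard Administrator (two OAs per enclosure)
    ports += 2 * c7000_enclosures     # one per Interconnect Ethernet switch (two per enclosure)
    ports += ups_modules              # one per optional UPS module
    ports += consoles                 # one per system console running OSM
    ports += 2 * clims                # one iLO port and one eth0 port per CLIM
    ports += 2 * ioam_enclosures      # one per ServerNet switch board in each IOAM enclosure
    ports += g4sa_lan_connections     # at least 2 if the maintenance LAN uses G4SAs
    return ports

# Example: two c7000 enclosures, four CLIMs, one IOAM enclosure, one UPS, one console
print(maintenance_switch_ports(2, 4, 1, g4sa_lan_connections=2, ups_modules=1))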
System Console  
A system console is a personal computer (PC) purchased from HP that runs maintenance and  
diagnostic software for NonStop BladeSystems. When supplied with a new NonStop BladeSystem,  
system consoles have factory-installed HP and third-party software for managing the system.  
You can install software upgrades from the HP NonStop System Console Installer DVD image.  
Some system console hardware, including the PC system unit, monitor, and keyboard, can be  
mounted in the NonStop BladeSystem's 19-inch rack. Other PCs are installed outside the rack  
and require separate provisions or furniture to hold the PC hardware.  
For more information on the system console, refer to “System Consoles” (page 89).  
UPS and ERM (Optional)  
An uninterruptible power supply (UPS) is optional but recommended where a site UPS is not  
available. HP supports the HP model R12000/3 UPS because it utilizes the power fail support  
provided by the OSM. For information about the requirements for installing a UPS, see  
There are two different versions of the R12000/3 UPS:  
For North America and Japan, the HP AF429A is utilized and uses an IEC309 560P9 (60A)  
input connector with 208V three phase (120V phase-to-neutral)  
For International, the HP AF430A is utilized and uses an IEC309 532P6 (32A) input connector  
with 400V three phase (230V phase-to-neutral).  
Cabinet configurations that include the HP UPS can also include extended runtime modules  
(ERMs). An ERM is a battery module that extends the overall battery-supported system run time.  
Up to four ERMs can be used for even longer battery-supported system run time. HP supports  
the HP AF434A ERM.  
WARNING! UPSs and ERMs must be mounted in the lowest portion of the NonStop
BladeSystem to avoid tipping and stability issues.  
NOTE: The R12000/3 UPS has two output connectors. For I/O racks, only the output connector  
to the rack level PDU is used. For processor racks, one output connector goes to the c7000 chassis  
and the other to the rack PDU. For power feed setup instructions, see “NonStop BladeSystem
Power Distribution” (page 37).
For the R12000/3 UPS power and environmental requirements, refer to Chapter 3 (page 37). For  
planning, installation, and emergency power-off (EPO) instructions, refer to the HP 3 Phase UPS  
User Guide. This guide is available at:  
http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01079392/c01079392.pdf  
For other UPSs, refer to the documentation shipped with the UPS.
Enterprise Storage System (Optional)  
An Enterprise Storage System (ESS) is a collection of magnetic disks, their controllers, and a disk  
cache in one or more standalone cabinets. ESS connects to the NonStop BladeSystem via the  
Storage CLIM's Fibre Channel HBA ports (direct connect), Fibre Channel ports on the IOAM  
enclosures (direct connect), or through a separate storage area network (SAN) using a Fibre  
Channel SAN switch (switched connect). For more information about these connection types,  
see your service provider.  
NOTE: The Fibre Channel SAN switch power cords might not be compatible with the modular  
cabinet PDU. Contact your service provider to order replacement power cords for the SAN switch  
that are compatible with the modular cabinet PDU.  
Cables and switches vary, depending on whether the connection is direct, switched, or a  
combination:  
Direct connect: 2 Fibre Channel ports on IOAM (LC-LC), or 2 Fibre Channel HBA ports on Storage CLIM (LC-MMF) (1); 0 Fibre Channel switches.
Switched: 4 Fibre Channel ports (LC-LC), or 4 Fibre Channel HBA ports on Storage CLIM (LC-MMF) (1); 1 or more Fibre Channel switches.
Combination of direct and switched: 2 Fibre Channel ports for each direct connection and 4 Fibre Channel ports for each switched connection (1); 1 or more Fibre Channel switches.
(1) Customer must order the FC HBA ports on the Storage CLIM.
Figure 1-3 shows an example of connections between two Storage CLIMs and an ESS via separate  
Fibre Channel switches:  
Figure 1-3 Connections Between Storage CLIMs and ESS  
For fault tolerance, the primary and backup paths to an ESS logical device (LDEV) must go  
through different Fibre Channel switches.  
Some storage area procedures, such as reconfiguration, can cause the affected switches to pause.  
If the pause is long enough, I/O failure occurs on all paths connected to that switch. If both the  
primary and the backup paths are connected to the same switch, the LDEV goes down.  
Refer to the documentation that accompanies the ESS.  
Tape Drive and Interface Hardware (Optional)  
For an overview of tape drives and the interface hardware, see “Fibre Channel Ports to Fibre  
For a list of supported tape devices, ask your service provider to refer to the NonStop BladeSystem  
Hardware Installation Manual.  
Preparation for Other Server Hardware  
This guide provides the specifications only for the NonStop BladeSystem modular cabinets and  
enclosures identified earlier in this section. For site preparation specifications for other HP  
hardware that will be installed with the NonStop BladeSystems, consult your HP account team.  
For site preparation specifications relating to hardware from other manufacturers, refer to the  
documentation for those devices.  
Management Tools for NonStop BladeSystems  
NOTE: For information about changing the default passwords for NonStop BladeSystem  
components and associated software, see “Changing Customer Passwords” (page 71).  
This subsection describes the management tools available on your NonStop BladeSystem:  
OSM Package  
The HP Open System Management (OSM) product is the required system management tool for  
NonStop BladeSystems. OSM works together with the Onboard Administrator (OA) and  
Integrated Lights Out (iLO) management interfaces to manage c7000 enclosures. A new  
client-based component, the OSM Certificate Tool, facilitates communication between OSM and  
the OA.  
For more information on the OSM package, including a description of the individual applications,
see the OSM Migration and Configuration Guide and the OSM Service Connection User's Guide.
Onboard Administrator (OA)  
The Onboard Administrator (OA) is the enclosure management processor, subsystem, and
firmware base that supports the c7000 enclosure and NonStop Server Blades. The OA software
is integrated with OSM and the Integrated Lights Out (iLO) management interface.
Integrated Lights Out (iLO)  
iLO allows you to perform activities on the NonStop BladeSystem from a remote location and
provides anytime access to system management information, such as hardware health, event logs,
and configuration, to troubleshoot and maintain the NonStop Server Blades.
Cluster I/O Protocols (CIP) Subsystem  
The Cluster I/O Protocols (CIP) subsystem provides a configuration and management interface  
for I/O on NonStop BladeSystems. The CIP subsystem has several tools for monitoring and  
managing the subsystem. For more information about these tools and the CIP subsystem, see  
the Cluster I/O Protocols (CIP) Configuration and Management Manual.  
Subsystem Control Facility (SCF) Subsystem  
The Subsystem Control Facility (SCF) also provides monitoring and management of the CIP  
subsystem on the NonStop BladeSystem. See the Cluster I/O Protocols (CIP) Configuration and
Management Manual for more information about using these two subsystems with NonStop BladeSystems.
Component Location and Identification  
This subsection includes these topics:  
Terminology  
These are terms used in locating and describing components:  
Cabinet: Computer system housing that includes a structure of external panels, front and rear doors, internal racking, and dual PDUs.
Rack: Structure integrated into the cabinet into which rackmountable components are assembled. The rack uses this naming convention: system-name-racknumber.
Rack Offset: The physical location of components installed in a modular cabinet, measured in U values numbered 1 to 42, with U1 at the bottom of the cabinet. A U is 1.75 inches (44 millimeters).
Group: A subset of a system that contains one or more modules. A group does not necessarily correspond to a single physical object, such as an enclosure.
Module: A subset of a group that is usually contained in an enclosure. A module contains one or more slots (or bays). A module can consist of components sharing a common interconnect, such as a backplane, or it can be a logical grouping of components performing a particular function.
Slot (or Bay or Position): A subset of a module that is the logical or physical location of a component within that module.
Port: A connector to which a cable can be attached and which transmits and receives data.
Fiber: Number (one to four) of the fiber pair (LC connector) within an MTP-LC fiber cable. An MTP-LC fiber cable has a single MTP connector on one end and four LC connectors, each containing a pair of fibers, at the other end. The MTP connector connects to the ServerNet switch in the c7000 enclosure, and the LC connectors connect to the CLIM.
Group-Module-Slot (GMS), Group-Module-Slot-Bay (GMSB), Group-Module-Slot-Port (GMSP), Group-Module-Slot-Port-Fiber (GMSPF): Notation methods used by hardware and software in NonStop systems for organizing and identifying the location of certain hardware components.
NonStop Server Blade: A server blade that provides processing and ServerNet connections.
On NonStop BladeSystems, locations of the modular components are identified by:  
Physical location:  
Rack number  
Rack offset  
Logical location: group, module, and slot (GMS) notation as defined by their position on  
the ServerNet rather than the physical location  
OSM uses GMS notation in many places, including the Tree view and Attributes window, and  
it uses rack and offset information to create displays of the server and its components.  
Rack and Offset Physical Location  
Rack name and rack offset identify the physical location of components in a NonStop BladeSystem.  
The rack name is located on an external label affixed to the rack, which includes the system name  
plus a 2-digit rack number.  
Rack offset is labeled on the rails in each side of the rack. These rails are measured vertically in  
units called U, with one U measuring 1.75 inches (44 millimeters). The rack is 42U, with U1 located
at the bottom and U42 at the top. The rack offset is the lowest number on the rack that the
component occupies.  
ServerNet Switch Group-Module-Slot Numbering  
Group (100-101):  
Group 100 is the first c7000 processor enclosure containing logical processors 0-7.  
Group 101 is the second c7000 processor enclosure containing logical processors 8-15.  
Module (2-3):  
Module 2 is the X fabric.  
Module 3 is the Y fabric.  
Slot (5 or 7):  
Slot 5 contains the double-wide ServerNet switch for the X fabric.  
Slot 7 contains the double-wide ServerNet switch for the Y fabric.  
NOTE: There are two types of c7000 ServerNet switches: Standard I/O and High I/O. For  
more information and illustrations of the ServerNet switch ports, refer to “I/O Connections  
Port (1-18):  
Ports 1 through 2 support the inter-enclosure links. Port 1 is marked GA. Port 2 is  
marked GB.  
Ports 3 through 8 support the I/O links (IP CLIM, Storage CLIM, and IOAM)  
NOTE: IOAMs must use Ports 4 through 7. These ports support 4-way IOAM links.  
Ports 9 and 10 support the cross links between two ServerNet switches in the same  
enclosure.  
Ports 11 and 12 support the links to a cluster switch. SH on Port 11 stands for short haul.  
LH on Port 12 stands for long haul.  
Ports 13 through 18 are not supported.  
Fiber (1-4)  
These fibers support up to 4 ServerNet links on ports 3-8 of the c7000 enclosure ServerNet  
switch.  
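The rules above can be summarized in a short sketch. The following Python fragment is illustrative only (it is not an HP utility); it checks a ServerNet switch GMSPF location against the numbering rules in this subsection and reports what the port supports.

PORT_ROLES = {
    1: "inter-enclosure link (GA)",
    2: "inter-enclosure link (GB)",
    **{p: "I/O link (IP CLIM, Storage CLIM, or IOAM)" for p in range(3, 9)},
    9: "cross link within the enclosure",
    10: "cross link within the enclosure",
    11: "cluster switch link (SH, short haul)",
    12: "cluster switch link (LH, long haul)",
}

def describe_gmspf(group, module, slot, port, fiber=None):
    """Validate a c7000 ServerNet switch GMSPF location and describe the port."""
    assert group in (100, 101), "group 100 is the first c7000 enclosure, 101 the second"
    assert module in (2, 3), "module 2 is the X fabric, module 3 is the Y fabric"
    assert (module, slot) in ((2, 5), (3, 7)), "X switch is in slot 5, Y switch in slot 7"
    assert port in PORT_ROLES, "ports 13 through 18 are not supported"
    if fiber is not None:
        assert 3 <= port <= 8 and 1 <= fiber <= 4, "fibers 1-4 apply only to I/O ports 3-8"
    return PORT_ROLES[port]

# Example: first enclosure, Y fabric, I/O port 3, fiber 1 (for example, a CLIM link)
print(describe_gmspf(100, 3, 7, 3, fiber=1))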
NonStop Server Blade Group-Module-Slot Numbering  
These tables show the default numbering for the NonStop Server Blades of a NonStop BladeSystem  
when the server blades are powered on and functioning:  
GMS Numbering for the Logical Processors:

Processor ID   Group*   Module   Slot*
0              100      1        1
1              100      1        2
2              100      1        3
3              100      1        4
4              100      1        5
5              100      1        6
6              100      1        7
7              100      1        8
8              101      1        1
9              101      1        2
10             101      1        3
11             101      1        4
12             101      1        5
13             101      1        6
14             101      1        7
15             101      1        8

*In the OSM Service Connection, the term Enclosure is used for the group and the term Bay is used for the slot.
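The default numbering in the table reduces to a simple rule: processors 0 through 7 are in group 100 and processors 8 through 15 are in group 101, all in module 1, with the slot equal to the processor's position within its enclosure plus one. The following Python sketch of that rule is for illustration only:

def processor_gms(processor_id):
    """Return the default (group, module, slot) for a logical processor, per the table above."""
    if not 0 <= processor_id <= 15:
        raise ValueError("NonStop BladeSystems support processors 0 through 15")
    group = 100 + processor_id // 8    # group 100 holds processors 0-7, group 101 holds 8-15
    module = 1
    slot = processor_id % 8 + 1        # slots 1 through 8 within each c7000 enclosure
    return group, module, slot

print(processor_gms(0))    # (100, 1, 1)
print(processor_gms(15))   # (101, 1, 8)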
CLIM Enclosure Group-Module-Slot-Port-Fiber Numbering  
This table shows the valid values for GMSPF numbering for the X1 ServerNet switch connection  
point to a CLIM:  
Group (ServerNet switch): 100-101
Module: 2, 3
Slots: 5, 7
Ports: 3 to 8
Fibers: 1 - 4
IOAM Enclosure Group-Module-Slot Numbering  
A NonStop BladeSystem supports IOAM enclosures, identified as group 110 through 115. Each
IOAM group connects through the c7000 ServerNet switches at these locations:

IOAM Group   c7000 Group   Module   Slot   Port     Fiber
110          100           2        5      4 (EA)   1 - 4
110          100           3        7      4 (EA)   1 - 4
111          100           2        5      6 (EC)   1 - 4
111          100           3        7      6 (EC)   1 - 4
112          100           2        5      5 (EB)   1 - 4
112          100           3        7      5 (EB)   1 - 4
113          100           2        5      7 (ED)   1 - 4
113          100           3        7      7 (ED)   1 - 4
114          101           2        5      4 (EA)   1 - 4
114          101           3        7      4 (EA)   1 - 4
115          101           2        5      6 (EC)   1 - 4
115          101           3        7      6 (EC)   1 - 4
Within an IOAM enclosure (groups 110 - 115, X ServerNet module 2, Y ServerNet module 3), the
slots are numbered as follows:

Slot     Item                           Port
1 to 5   ServerNet adapters             1 - n, where n is the number of ports on the adapter
14       ServerNet switch logic board   1 - 4
15, 18   Power supplies                 -
16, 17   Fans                           -
This illustration shows the slot locations for the IOAM enclosure:  
Fibre Channel Disk Module Group-Module-Slot Numbering  
This table shows the default numbering for the Fibre Channel disk module:  
IOAM enclosure group: 110-115
Module: 2 (X fabric) or 3 (Y fabric)
Slot (FCSA): 1 - 5
FCSA F-SACs: 1, 2
FCDM shelf: 1 - 4 if daisy-chained; 1 if a single disk enclosure

Slot within the FCDM   Item
0                      Fibre Channel disk module
1 - 14                 Disk drive bays
89                     Transceiver A1
90                     Transceiver A2
91                     Transceiver B1
92                     Transceiver B2
93                     Left FC-AL board
94                     Right FC-AL board
95                     Left power supply
96                     Right power supply
97                     Left blower
98                     Right blower
99                     EMU
The form of the GMS numbering for a disk in a Fibre Channel disk module is:  
This example shows the disk in bay 03 of the Fibre Channel disk module that connects to the  
FCSA in the IOAM group 111, module 2, slot 1, FSAC 1:  
System Installation Document Packet  
To keep track of the hardware configuration, internal and external communications cabling, IP
addresses, and connected networks, assemble and retain an Installation Document Packet as the
system's records. This packet can include:
Technical Document for the Factory-Installed Hardware Configuration  
Each new NonStop BladeSystem includes a document that describes:  
The cabinet included with the system  
Each hardware enclosure installed in the cabinet  
Cabinet U location of the bottom edge of each enclosure  
Each ServerNet cable with:  
Source and destination enclosure, component, and connector  
Cable part number  
Source and destination connection labels  
This document is called a technical document and serves as the physical location and connection  
map for the system.  
Configuration Forms for the ServerNet Adapters and CLIMs  
To add configuration forms for ServerNet adapters or CLIMs to your Installation Document  
Packet, copy the necessary forms from the adapter manuals or the CLuster I/O Module (CLIM)  
Installation and Configuration Guide. Follow any planning instructions in these manuals.  
2 Site Preparation Guidelines  
This section describes power, environmental, and space considerations for your site.  
Modular Cabinet Power and I/O Cable Entry  
Power and I/O cables can enter the NonStop BladeSystem from either the top or the bottom rear  
of the modular cabinets, depending on how the cabinets are ordered from HP and the routing  
of the AC power feeds at the site. NonStop BladeSystem cabinets can be ordered with the AC  
power cords for the PDUs exiting either:  
Top: Power and I/O cables are routed from above the modular cabinet.  
Bottom: Power and I/O cables are routed from below the modular cabinet.
For information about modular cabinet power and cable options, refer to “AC Input Power for
Modular Cabinets” (page 44).
Emergency Power-Off Switches  
Emergency power off (EPO) switches are required by local codes or other applicable regulations  
when computer equipment contains batteries capable of supplying more than 750 volt-amperes  
(VA) for more than five minutes. Systems that have these batteries also have internal EPO hardware
for connection to a site EPO switch or relay. In an emergency, activating the EPO switch or relay  
removes power from all electrical equipment in the computer room (except that used for lighting  
and fire-related sensors and alarms).  
EPO Requirement for NonStop BladeSystems  
NonStop BladeSystems without an optional UPS (such as an HP R12000/3 UPS) installed in the  
modular cabinet do not contain batteries capable of supplying more than 750 volt-amperes (VA)  
for more than five minutes, so they do not require connection to a site EPO switch.
EPO Requirement for HP R12000/3 UPS  
The rack-mounted HP R12000/3, 12kVA UPS can be optionally installed in a modular cabinet,  
contains batteries, and has a remote EPO (REPO) port. For site EPO switches or relays, consult  
your HP site preparation specialist or electrical engineer regarding requirements.  
If an EPO switch or relay connector is required for your site, contact your HP representative or  
refer to the HP 3 Phase UPS User Guide for connector and wiring for the 12kVA model. This guide  
is available at:  
http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01079392/c01079392.pdf  
Electrical Power and Grounding Quality  
Proper design and installation of a power distribution system for a NonStop BladeSystem requires  
specialized skills, knowledge, and understanding of appropriate electrical codes and the limitations  
of the power systems for computer and data processing equipment. For power and grounding  
Power Quality  
This equipment is designed to operate reliably over a wide range of voltages and frequencies,  
described in “Enclosure AC Input” (page 45). However, damage can occur if these ranges are  
exceeded. Severe electrical disturbances can exceed the design specifications of the equipment.  
Common sources of such disturbances are:  
Fluctuations occurring within the facility's distribution system
Utility service low-voltage conditions (such as sags or brownouts)  
Wide and rapid variations in input voltage levels  
Wide and rapid variations in input power frequency  
Electrical storms  
Large inductive sources (such as motors and welders)  
Faults in the distribution system wiring (such as loose connections)  
Computer systems can be protected from the sources of many of these electrical disturbances by  
using:  
A dedicated power distribution system  
Power conditioning equipment  
Lightning arresters on power cables to protect equipment against electrical storms  
For steps to take to ensure proper power for the servers, consult with your HP site preparation  
specialist or power engineer.  
Grounding Systems  
The site building must provide a power distribution safety ground/protective earth for each AC  
service entrance to all NonStop BladeSystem equipment. This safety grounding system must  
comply with local codes and any other applicable regulations for the installation locale.  
For proper grounding/protective earth connection, consult with your HP site preparation specialist  
or power engineer.  
Power Consumption  
In a NonStop BladeSystem, the power consumption and inrush currents per connection can vary  
because of the unique combination of enclosures housed in the modular cabinet. Thus, the total  
power consumption for the hardware installed in the cabinet should be calculated as described  
Uninterruptible Power Supply (UPS)  
Modular cabinets do not have built-in batteries to provide power during power failures. To  
support system operation and ride-through support during a power failure, NonStop  
BladeSystems require either an optional UPS (HP supports the HP model R12000/3 UPS) installed  
in each modular cabinet or a site UPS to support system operation through a power failure. This  
system operation support can include a planned orderly shutdown at a predetermined time in  
the event of an extended power failure. A timely and orderly shutdown prevents an uncontrolled  
and asymmetric shutdown of the system resources from depleted UPS batteries.  
OSM provides this ride-through support during a power failure. When OSM detects a power  
failure, it triggers a ride-through timer. To set this timer, you must configure the ride-through  
time in SCF. For this information, refer to the SCF Reference Manual for the Kernel Subsystem. If  
AC power is not restored before the configured ride-through time period ends, OSM initiates  
an orderly shutdown of I/O operations and processors. For additional information, see AC  
NOTE: Retrofitting a system in the field with a UPS and ERMs will likely require moving all  
installed enclosures in the rack to provide space for the new hardware. One or more of the  
enclosures that formerly resided in the rack might be displaced and therefore have to be installed  
in another rack that would also need a UPS and ERMs installed. Additionally, lifting equipment  
might be required to lift heavy enclosures to their new location.  
For information and specifications on the R12000/3 UPS, see Chapter 3 (page 37) and refer to  
the HP 3 Phase UPS User Guide. This guide is available at:  
http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01079392/c01079392.pdf  
If you install a UPS other than the HP model R12000/3 UPS in each modular cabinet of a NonStop  
BladeSystem, these requirements must be met to ensure the system can survive a total AC power
failure:
The UPS output voltage can support the HP PDU input voltage requirements.  
The UPS phase output matches the PDU phase input. For NonStop BladeSystems, 3-phase  
output UPSs and 3-phase input HP PDUs are supported. For details, refer to Chapter 3  
The UPS output can support the targeted system in the event of an AC power failure.  
Calculate each cabinet load to ensure the UPS can support a proper ride-through time in the
event of a total AC power failure. For more information, refer to “Enclosure Power Loads”  
NOTE: A UPS other than the HP model R12000/3 UPS will not be able to utilize the power  
fail support of the Configure a Power Source as UPS OSM action.  
If your applications require a UPS that supports the entire system or even a UPS or motor  
generator for all computer and support equipment in the site, you must plan the site's electrical
infrastructure accordingly.
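As a rough planning aid, ride-through capability can be estimated from the usable energy of the UPS and the calculated cabinet load. The Python sketch below is a simplified, hypothetical calculation: it ignores battery aging, inverter efficiency, and discharge-rate effects, and it is not a substitute for the sizing data in the HP 3 Phase UPS User Guide or the enclosure power loads in Chapter 3.

def estimated_ride_through_minutes(usable_watt_hours, cabinet_load_watts):
    """Idealized ride-through estimate: stored energy divided by load."""
    return 60.0 * usable_watt_hours / cabinet_load_watts

# Example with placeholder numbers: 4000 Wh of usable UPS energy and a 9000 W cabinet load
minutes = estimated_ride_through_minutes(4000, 9000)
configured_ride_through = 20   # minutes configured for the OSM power-fail ride-through timer
if minutes < configured_ride_through:
    print(f"Estimated {minutes:.1f} min is less than the configured "
          f"{configured_ride_through} min; add ERMs or reduce the cabinet load.")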
Cooling and Humidity Control  
Do not rely on an intuitive approach to cooling design, or on simply achieving an energy balance,
that is, summing the total power dissipation from all the hardware and sizing a
comparable air conditioning capacity. Today's high-performance NonStop BladeSystems use
semiconductors that integrate multiple functions on a single chip with very high power densities.  
These chips, plus high-power-density mass storage and power supplies, are mounted in ultra-thin  
system and storage enclosures, and then deployed into computer racks in large numbers. This  
higher concentration of devices results in localized heat, which increases the potential for hot  
spots that can damage the equipment.  
Additionally, variables in the installation site layout can adversely affect air flows and create hot  
spots by allowing hot and cool air streams to mix. Studies have shown that above 70°F (20°C),  
every increase of 18°F (10°C) reduces long-term electronics reliability by 50%.  
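Expressed as a formula, that rule of thumb says the long-term reliability factor roughly halves for every 18°F (10°C) above 70°F. A small illustrative sketch:

def relative_reliability(ambient_f):
    """Rule of thumb from the paragraph above: reliability halves per 18 degrees F above 70 F."""
    return 0.5 ** max(0.0, (ambient_f - 70.0) / 18.0)

print(relative_reliability(70))   # 1.0, the baseline
print(relative_reliability(88))   # 0.5, one 18-degree F increase above 70 F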
Cooling airflow through each enclosure in the NonStop BladeSystem is front-to-back. Because  
of high heat densities and hot spots, an accurate assessment of air flow around and through the  
system equipment and specialized cooling design is essential for reliable system operation. For  
an airflow assessment, consult with your HP cooling consultant or your heating, ventilation, and  
air conditioning (HVAC) engineer.  
NOTE: Failure of site cooling with the NonStop BladeSystem continuing to run can cause rapid  
heat buildup and excessive temperatures within the hardware. Excessive internal temperatures  
can result in full or partial system shutdown. Ensure that the site's cooling system remains fully
operational when the NonStop BladeSystem is running.  
Because each modular cabinet houses a unique combination of enclosures, use the “Heat  
Dissipation Specifications and Worksheet” (page 50) to calculate the total heat dissipation for  
the hardware installed in each cabinet. For air temperature levels at the site, refer to “Operating  
Weight  
Because modular cabinets for NonStop BladeSystems house a unique combination of enclosures,  
total weight must be calculated based on what is in the specific cabinet, as described in “Modular  
Flooring  
NonStop BladeSystems can be installed either on the sites floor with the cables entering from  
above the equipment or on raised flooring with power and I/O cables entering from underneath.  
Because cooling airflow through each enclosure in the modular cabinets is front-to-back, raised  
flooring is not required for system cooling.  
The site floor structure and any raised flooring (if used) must be able to support the total weight  
of the installed computer system as well as the weight of the individual modular cabinets and  
their enclosures as they are moved into position. To determine the total weight of each modular  
cabinet with its installed enclosures, refer to “Modular Cabinet and Enclosure Weights With  
For your sites floor system, consult with your HP site preparation specialist or an appropriate  
floor system engineer. If raised flooring is to be used, the design of the NonStop BladeSystem  
modular cabinet is optimized for placement on 24-inch floor panels.  
Dust and Pollution Control  
NonStop BladeSystems do not have air filters. Any computer equipment can be adversely affected  
by dust and microscopic particles in the site environment. Airborne dust can blanket electronic  
components on printed circuit boards, inhibiting cooling airflow and causing premature failure  
from excess heat, humidity, or both. Metallically conductive particles can short circuit electronic  
components. Tape drives and some other mechanical devices can experience failures resulting  
from airborne abrasive particles.  
For recommendations to keep the site as free of dust and pollution as possible, consult with your  
heating, ventilation, and air conditioning (HVAC) engineer or your HP site preparation specialist.  
Zinc Particulates  
Over time, fine whiskers of pure metal can form on electroplated zinc, cadmium, or tin surfaces  
such as aged raised flooring panels and supports. If these whiskers are disturbed, they can break  
off and become airborne, possibly causing computer failures or operational interruptions. This  
metallic particulate contamination is a relatively rare but possible threat. Kits are available to  
test for metallic particulate contamination, or you can request that your site preparation specialist  
or HVAC engineer test the site for contamination before installing any electronic equipment.  
Space for Receiving and Unpacking the System  
Identify areas that are large enough to receive and to unpack the system from its shipping cartons  
and pallets. Be sure to allow adequate space to remove the system equipment from the shipping  
pallets using supplied ramps. Also be sure adequate personnel are present to remove each cabinet  
from its shipping pallet and to safely move it to the installation site.  
WARNING! A fully populated cabinet is unstable when moving down the unloading ramp  
from its shipping pallet. Arrange for enough personnel to stabilize each cabinet during removal  
from the pallet and to prevent the cabinet from falling. A falling cabinet can cause serious or  
fatal personal injury.  
Ensure sufficient pathways and clearances for moving the NonStop BladeSystem equipment  
safely from the receiving and unpacking areas to the installation site. Verify that door and hallway  
width and height as well as floor and elevator loading will accommodate not only the system  
equipment but also all required personnel and lifting or moving devices. If necessary, enlarge  
or remove any obstructing doorway or wall.  
All modular cabinets have small casters to facilitate moving them on hard flooring from the  
unpacking area to the site. Because of these small casters, rolling modular cabinets along carpeted  
or tiled pathways might be difficult. If necessary, plan for a temporary hard floor covering in  
affected pathways for easier movement of the equipment.  
For physical dimensions of the NonStop BladeSystem equipment, refer to “Dimensions and
Weights” (page 47).
Operational Space  
When planning the layout of the NonStop BladeSystem site, use the equipment dimensions, door  
swing, and service clearances listed in “Dimensions and Weights” (page 47). Because location  
of the lighting fixtures and electrical outlets affects servicing operations, consider an equipment  
layout that takes advantage of existing lighting and electrical outlets.  
Also consider the location and orientation of current or future air conditioning ducts and airflow  
direction and eliminate any obstructions to equipment intake or exhaust air flow. Refer to “Cooling
and Humidity Control” (page 33).
Space planning should also include the possible addition of equipment or other changes in space  
requirements. Depending on the current or future equipment installed at your site, layout plans  
can also include provisions for:  
Channels or fixtures used for routing data cables and power cables  
Access to air conditioning ducts, filters, lighting, and electrical power hardware  
Communications cables, patch panels, and switch equipment  
Power conditioning equipment  
Storage area or cabinets for supplies, media, and spare parts  
3 System Installation Specifications  
This section provides specifications necessary for system installation planning.  
NOTE: All specifications provided in this section assume that each enclosure in the modular  
cabinet is fully populated. The maximum current for each AC service depends on the number  
and type of enclosures installed in the modular cabinet. Power, weight, and heat loads are less  
when enclosures are not fully populated; for example, a Fibre Channel disk module with fewer  
disks.  
Modular Cabinets  
The modular cabinet is an EIA standard 19-inch, 42U rack for mounting modular components.
The modular cabinet comes equipped with front and rear doors and includes a rear extension  
that makes it deeper than some industry-standard racks. The “Power Distribution Units (PDUs)”  
(page 42) are mounted along the rear extension without occupying any U-space in the cabinet  
and are oriented inward, facing the components within the rack.  
NonStop BladeSystem Power Distribution  
There are two power configurations for NonStop BladeSystems:  
North America/Japan (NA/JPN): requires 208V three phase (120V phase to neutral) and  
loads wired phase-to-phase  
International (INTL): requires 400V three phase with loads wired phase to neutral (230V)  
Both power configurations require 200V to 240V distribution and careful attention to phase load  
balancing. For more information, see “Phase Load Balancing” (page 45).  
The NonStop BladeSystem's three-phase c7000 enclosure contains an AC Input Module that
provides 2N redundant power distribution for the power configurations. This power module  
comes with a pair of power cords that provide direct AC power feeds to the c7000 enclosure:  
One c7000 power feed is from the main power source and the other is from a backup UPS grid.  
For the R12000/3 UPS installed in a rack, the backup power source for the c7000 is one of the  
dedicated three phase outputs. There is no power sharing between the c7000 and the rack PDU  
feed. Two three-phase rack PDUs power all the other components except the c7000 in the NonStop  
BladeSystem. One PDU is connected to the main power input grid; the other to the backup grid.
For racks with an integral UPS, this is one of the dedicated three-phase outputs of the UPS.
There are two different versions of the rack-level PDU. For more details, see “Power Distribution
Units (PDUs)” (page 42).
Power Feed Setup for the NonStop BladeSystem  
Power set up depends on your power configuration type:  
North America/Japan Power Setup With Rack-Mounted UPS  
To set up the power feed connections as shown in Figure 3-1:
1. Connect one 3-phase 60A power feed to the rack-mounted UPS IEC309 560P9 (60A, 5 wire/4  
pole) input connector.  
2. Connect one 3-phase 30A power feed to the AF504A PDU NEMA L15-30P (30A, 4 wire/3  
pole) input connector.  
3. Connect one 3-phase 30A power feed to the c7000 enclosure's NEMA L15-30P (30A, 4 wire/3  
pole) input connector.  
Figure 3-1 North America/Japan 3-Phase Power Setup With Rack-Mounted UPS  
North America/Japan Power Setup Without Rack-Mounted UPS  
To set up the power feed connections as shown in Figure 3-2:
1. Connect two 3-phase 30A power feeds to the two AF504A PDU NEMA L15-30P (30A, 4  
wire/3 pole) input connectors.  
2. Connect two 3-phase 30A power feeds to the two NEMA L15-30P (30A, 4 wire/3 pole) input  
connectors within the c7000 enclosure.  
Figure 3-2 North America/Japan Power Setup  
International Power Setup With Rack-Mounted UPS  
To set up the power feed connections as shown in Figure 3-3 (page 41):
1. Connect one 3-phase 32A power feed to the rack-mounted UPS IEC309 532P6 (32A, 5 wire/4  
pole) input connector.  
2. Connect one 3-phase 16A power feed to the AF508A PDU IEC309 516P6 (16A, 5 wire/4 pole)  
input connector.  
3. Connect one 3-phase 16A power feed to the c7000 enclosure's IEC309 516P6 (16A, 5 wire/4  
pole) input connector.  
Figure 3-3 International 3-Phase Power Setup With UPS  
International Power Setup Without Rack-Mounted UPS  
To set up the power feed connections as shown in Figure 3-4:
1. Connect two 3-phase 16A power feeds to the two AF508A PDU IEC309 516P6 (16A, 5 wire/4  
pole) input connectors.  
2. Connect two 3-phase 16A power feeds to the two IEC309 516P6 (16A, 5 wire/4 pole) input  
connectors within the c7000 enclosure.  
Figure 3-4 International Power Setup Without Rack-Mounted UPS  
Power Distribution Units (PDUs)  
Two power distribution units (PDUs) are installed to provide redundant power outlets for the  
components mounted in the modular cabinet. The PDUs are oriented inward, facing the  
components within the rack. Each PDU is 60 inches long and has 39 AC receptacles, three circuit  
breakers, and an AC power cord. The PDU is oriented with the AC power cord exiting the  
modular cabinet at either the top or bottom rear corners of the cabinet, depending on the site's  
power feed needs.  
For information about specific PDU input and output characteristics for PDUs factory-installed
in modular cabinets, refer to “AC Input Power for Modular Cabinets” (page 44).
Each PDU in a modular cabinet has:  
36 AC receptacles per PDU (12 per segment) - IEC 320 C13 10A receptacle type  
3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type  
3 circuit-breakers  
These PDU options are available to receive power from the site AC power source:  
208 V AC, three-phase delta for North America and Japan  
400 V AC, three-phase wye for International  
Each PDU distributes site three-phase power to 39 single-phase 200 to 240 V AC outlets for  
connecting the power cords from the components mounted in the modular cabinet.  
The AC power feed cables for the PDUs are mounted to exit the modular cabinet at either the  
top or bottom rear corners of the cabinet depending on what is ordered for the site's power feed.  
Figure 3-5 shows the power feed cables on PDUs with AC feed at the bottom of the cabinet and  
the AC power outlets along the PDU. These power outlets face in toward the components in the  
cabinet.  
Figure 3-5 Bottom AC Power Feed  
Figure 3-6 shows the power feed cables on PDUs with AC feed at the top of the cabinet:  
Figure 3-6 Top AC Power Feed  
AC Input Power for Modular Cabinets  
This subsection provides information about AC input power for modular cabinets.
Power can enter the NonStop BladeSystem from either the top or the bottom rear of the modular  
cabinets, depending on how the cabinets are ordered from HP and the AC power feeds are routed  
at the site. NonStop BladeSystem cabinets can be ordered with the AC power cords for the PDU  
installed either:  
Top: Power and I/O cables are routed from above the modular cabinet.  
Bottom: Power and I/O cables are routed from below the modular cabinet.
For information on the modular cabinets, refer to “Modular Cabinets” (page 37).
North America and Japan: 208 V AC PDU Power  
The cabinet includes two power distribution units (PDUs). The PDU power characteristics are:
PDU input characteristics  
208 V AC, 3-phase delta, 24A RMS, 4-wire  
50/60Hz  
NEMA L15-30 input plug  
6.5 feet (2 m) attached power cord  
PDU output characteristics  
3 circuit-breaker-protected 13.86A load segments  
36 AC receptacles per PDU (12 per segment) - IEC 320  
C13 10A receptacle type  
3 AC receptacles per PDU (1 per segment) - IEC 320  
C19 16A receptacle type  
International: 400 V AC PDU Power  
The cabinet includes two power distribution units (PDUs). The PDU power characteristics are:
PDU input characteristics  
380 to 415 V AC, 3-phase Wye, 16A RMS, 5-wire  
50/60Hz  
IEC309 5-pin, 16A input plug  
6.5 feet (2 m) attached harmonized power cord  
PDU output characteristics  
3 circuit-breaker-protected 16A load segments  
36 AC receptacles per PDU (12 per segment) - IEC 320  
C13 10A receptacle type  
3 AC receptacles per PDU (1 per segment) - IEC 320  
C19 16A receptacle type  
Branch Circuits and Circuit Breakers  
Modular cabinets for the NonStop BladeSystem contain two PDUs.  
In cabinets without the optional rack-mounted UPS, each of the two PDUs requires a separate  
branch circuit of these ratings:  
Region                     Volts    Amps (see following “CAUTION”)
North America and Japan    208      30 (1)
International              400      16 (1)
(1) Category D circuit breaker is required.
CAUTION: Be sure the hardware configuration and resultant power loads of each cabinet within  
the system do not exceed the capacity of the branch circuit according to applicable electrical  
codes and regulations.  
Branch circuit requirements vary by the input voltage and the local codes and applicable  
regulations regarding maximum circuit and total distribution loading.  
Select circuit breaker ratings according to local codes and any applicable regulations for the  
circuit capacity. Note that circuit breaker ratings vary if your system includes the optional  
rack-mounted HP Model R12000/3 Integrated UPS.  
These ratings apply to systems with the optional rack-mounted HP Model R12000/3 Integrated  
UPS:  
Version                    Operating Voltage Settings   Power Out (VA/Watts)   Input Plug        UPS Input Rating (1)
North America and Japan    208                          12000                  IEC-309 60 Amp    Dedicated 36 Amp
International              230                          12000                  IEC-309 32 Amp    Dedicated 24 Amp
(1) The UPS input requires a dedicated (unshared) branch circuit that is suitably rated for your specific UPS.
For further information and specifications on the R12000/3 UPS (12kVA model), refer to the HP  
3 Phase UPS User Guide for the 12kVA model. This guide is available at:  
http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01079392/c01079392.pdf  
Enclosure AC Input  
Enclosures (c7000, IP CLIM, IOAM enclosure, and so forth) require:  
Specification                                Value
Nominal input voltage                        200/208/220/230/240 V AC RMS
Voltage range                                180-264 V AC
Nominal line frequency                       50 or 60 Hz
Frequency ranges                             47-53 Hz or 57-63 Hz
Number of phases (c7000 enclosure only)      3
Number of phases (all other components)      1
Phase Load Balancing  
Each PDU is wired such that there are three load segments with groups of outlets alternating  
between load segments, going up and down the PDU. Refer to “Power Distribution Units (PDUs)”  
(page 42). Factory-installed enclosures, other than the c7000, are connected to the PDUs on  
alternating load segments to facilitate phase load balancing. The c7000 has its own three-phase  
input, with each phase (International) or pairs of phases (North America/Japan) associated with  
one of the c7000 power supplies. When the c7000 is operating in Dynamic Power Saving Mode,  
the minimum number of power supplies are enabled to redundantly power the enclosure. This  
mode increases power supply efficiency, but leaves the phases or phase pairs associated with  
the disabled power supplies unloaded. For multiple-cabinet installations, in order to balance  
phase loads when Dynamic Power Saving Mode is enabled, HP recommends rotating the phases  
from one cabinet to the next. For example, if the first cabinet is wired A-B-C, the next cabinet  
should be wired B-C-A, and the next C-A-B, and so on.  
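As a planning aid for multiple-cabinet installations, the phase rotation described above can be tabulated ahead of time. The following minimal Python sketch is only an illustration (the cabinet count and phase labels are assumptions, not values from this guide); it prints a rotated wiring order for each cabinet:

# Sketch: rotate the three input phases from one cabinet to the next so that
# Dynamic Power Saving Mode does not leave the same phase (or phase pair)
# lightly loaded in every cabinet.

PHASES = ["A", "B", "C"]          # assumed labels for the site's three phases

def phase_order(cabinet_index: int) -> list:
    """Return the phase wiring order for a cabinet (0-based index)."""
    shift = cabinet_index % len(PHASES)
    return PHASES[shift:] + PHASES[:shift]

if __name__ == "__main__":
    for cabinet in range(4):      # example: a four-cabinet installation
        print("Cabinet %d: wired %s" % (cabinet + 1, "-".join(phase_order(cabinet))))

For a four-cabinet example this prints A-B-C, B-C-A, C-A-B, and A-B-C again, matching the rotation recommended above.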
Enclosure Power Loads  
The total power and current load for a modular cabinet depends on the number and type of  
enclosures installed in it. Therefore, the total load is the sum of the loads for all enclosures  
installed. For examples of calculating the power and current load for various enclosure
combinations, see “Calculating Specifications for Enclosure Combinations” (page 51).
In normal operation, the AC power is split equally between the two PDUs in the modular cabinet.  
However, if one of the two AC power feeds fails, the remaining AC power feed and PDU must  
carry the power for all enclosures in that cabinet.  
Power and current specifications for each type of enclosure are:  
Enclosure Type                       AC Power Lines      Apparent Power (volt-amps measured    Apparent Power (volt-amps measured on single   Peak Inrush
                                     per Enclosure (1)   on single AC line with one line       AC line with both lines powered) (2)           Current (amps)
                                                         powered)                              Per line / Total
c7000 (3)                            2                   4300                                  2200 / 4400                                    210
IP CLIM                              2                   320                                   185 / 370                                      15
Storage CLIM                         2                   320                                   185 / 370                                      15
SAS disk enclosure                   2                   260                                   140 / 280                                      5
IOAM enclosure                       2                   262                                   163 / 326                                      30
Fibre Channel disk module (4)        2                   290                                   174 / 348                                      14
Rack-mounted system console          1                   176                                   - / -                                          27
Rack-mounted keyboard and monitor    1                   28                                    - / -                                          2
Maintenance switch (Ethernet) (5)    1                   44                                    - / -                                          4
(1) See “Power Feed Setup for the NonStop BladeSystem” (page 38) for c7000 enclosure power feed requirements.
(2) Total apparent power is the sum of the two AC power lines feeding the enclosure. Electrical load is shared equally
between the two lines.
(3) Decrease the apparent power VA specification by 508 VA for each empty NonStop Server Blade slot. For example, a
c7000 that only has four NonStop Server Blades installed would be rated 4400 VA minus (4 server blade slots x 508 VA)
= 2368 VA apparent power.
(4) Measured with 14 disk drives installed and active.
(5) Maintenance switch has only one AC plug.
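For planning purposes, the per-enclosure figures above can be combined into a quick estimate. The following Python sketch is only an illustration of the arithmetic described in this section (the enclosure mix is an assumed example; the single-line VA values and the 508 VA per empty blade slot derating come from the table and footnote 3 above); it sizes the load that a single AC feed must carry if the other feed fails:

# Sketch: estimate the apparent power one AC feed must carry if the other
# feed fails, using the single-line VA figures from the table above.

SINGLE_LINE_VA = {           # volt-amps on a single AC line, one line powered
    "c7000": 4300,
    "ip_clim": 320,
    "storage_clim": 320,
    "sas_disk_enclosure": 260,
    "ioam_enclosure": 262,
    "fcdm": 290,
}
VA_PER_EMPTY_BLADE_SLOT = 508   # derating per empty NonStop Server Blade slot (footnote 3)
C7000_BLADE_SLOTS = 8

def c7000_va(blades_installed: int) -> int:
    """Apparent power of a c7000 derated for empty blade slots."""
    empty_slots = C7000_BLADE_SLOTS - blades_installed
    return SINGLE_LINE_VA["c7000"] - empty_slots * VA_PER_EMPTY_BLADE_SLOT

def cabinet_single_line_va(blades_installed: int, counts: dict) -> int:
    """Sum single-line VA for one c7000 plus the other enclosures in a cabinet."""
    total = c7000_va(blades_installed)
    for enclosure, qty in counts.items():
        total += SINGLE_LINE_VA[enclosure] * qty
    return total

if __name__ == "__main__":
    # Assumed example mix: 8 blades, 2 IP CLIMs, 2 SAS disk enclosures.
    example = {"ip_clim": 2, "sas_disk_enclosure": 2}
    print("Single-feed load: %d VA" % cabinet_single_line_va(8, example))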
Dimensions and Weights  
This subsection provides information about the dimensions and weights for modular cabinets  
and enclosures installed in a modular cabinet and covers these topics:  
Plan View of the 42U Modular Cabinet  
Service Clearances for the Modular Cabinets  
Aisles: 6 feet (182.9 centimeters)  
Front: 3 feet (91.4 centimeters)  
Rear: 3 feet (91.4 centimeters)  
Unit Sizes  
Enclosure Type                          Height (U)
Modular cabinet                         42
c7000 enclosure                         10
IP CLIM                                 2
Storage CLIM                            2
SAS disk enclosure                      2
IOAM enclosure                          11
Fibre Channel disk module (FCDM)        3
Maintenance switch (Ethernet)           1
R12000/3 UPS                            6
Extended runtime module                 3
Rack-mounted system console             2
42U Modular Cabinet Physical Specifications  
Item                     Height in. (cm)    Width in. (cm)    Depth in. (cm)     Weight
Modular cabinet          78.7 (199.9)       24.0 (60.96)      46.7 (118.6)       Depends on the enclosures installed. Refer to
                                                                                 “Modular Cabinet and Enclosure Weights With Worksheet” (page 49).
Rack                     78.5 (199.4)       23.62 (60.0)      42.5 (108.0)
Front door               78.5 (199.4)       23.5 (59.7)       3.2 (8.1)
Left-rear door           78.5 (199.4)       11.0 (27.9)       1.0 (2.5)
Right-rear door          78.5 (199.4)       12.0 (30.5)       1.0 (2.5)
Shipping (palletized)    86.5 (219.71)      35.75 (90.80)     54.25 (137.80)
Enclosure Dimensions  
Enclosure Type                                           Height in (cm)    Width in (cm)    Depth in (cm)
c7000 enclosure                                          17.4 (44.1)       17.5 (44.4)      32 (81.2)
IP or Storage CLIM                                       3.3 (8.5)         17.5 (44.5)      26 (66)
SAS disk enclosure                                       3.4 (8.8)         17.6 (44.8)      23.2 (59)
IOAM enclosure                                           19.25 (48.9)      19.0 (48.3)      27.0 (68.6)
Fibre Channel disk module                                5.2 (13.1)        19.9 (50.5)      17.6 (44.8)
Maintenance switch (Ethernet)                            1.8 (4.6)         17.4 (44.2)      8.0 (20.3)
Rack-mounted system console with keyboard and display    1.7 (4.3)         16.8 (42.7)      24.0 (60.9)
R12000/3 UPS                                             10.3 (26.1)       14.4 (36.5)      26 (66)
Extended runtime module (ERM)                            5.2 (13.2)        17.2 (43.6)      26 (66)
Modular Cabinet and Enclosure Weights With Worksheet  
The total weight of each modular cabinet is the sum of the weight of the cabinet plus the weight
of each enclosure installed in it. Use this worksheet to determine the total weight:
Enclosure Type                                           Number of Enclosures    Weight lbs (kg)                   Total lbs (kg)
Modular cabinet 42U (1)                                  1                       303 (137)
c7000 Enclosure                                                                  480 (218)
IOAM enclosure                                                                   235 (106)
Fibre Channel disk module (FCDM)                                                 78 (35)
IP or Storage CLIM                                                               60 (27)
SAS disk enclosure                                                               48 (25)
Maintenance switch (Ethernet)                                                    6 (3)
Rack-mounted system console with keyboard and display                            34 (15)
R12000/3 UPS                                                                     307 (139.2) with batteries;
                                                                                 135 (59.8) without batteries
Extended runtime module (ERM)                                                    170 (77)
Total                                                    --                                                        --
(1) Modular cabinet weight includes the PDUs and their associated wiring and receptacles.
For examples of calculating the weight for various enclosure combinations, refer to “Calculating
Specifications for Enclosure Combinations” (page 51).
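If you prefer to script the worksheet, the following minimal Python sketch (the enclosure counts are an assumed example similar to the one later in this chapter; the per-unit weights are taken from the worksheet above) totals cabinet weight the same way the worksheet does:

# Sketch: total modular cabinet weight = cabinet weight + sum of the weights
# of the enclosures installed in it (values from the worksheet above).

WEIGHT_LBS = {
    "modular_cabinet_42u": 303,
    "c7000_enclosure": 480,
    "ioam_enclosure": 235,
    "fcdm": 78,
    "ip_or_storage_clim": 60,
    "sas_disk_enclosure": 48,
    "maintenance_switch": 6,
    "system_console": 34,
}

def total_weight_lbs(counts: dict) -> int:
    """Sum the worksheet: number of each enclosure times its unit weight."""
    return sum(WEIGHT_LBS[name] * qty for name, qty in counts.items())

if __name__ == "__main__":
    # Assumed example: one cabinet with one c7000, two CLIMs, two SAS disk
    # enclosures, one IOAM, two FCDMs, one console, and one maintenance switch.
    example = {
        "modular_cabinet_42u": 1,
        "c7000_enclosure": 1,
        "ioam_enclosure": 1,
        "fcdm": 2,
        "ip_or_storage_clim": 2,
        "sas_disk_enclosure": 2,
        "maintenance_switch": 1,
        "system_console": 1,
    }
    lbs = total_weight_lbs(example)
    print("Total cabinet weight: %d lbs (about %.0f kg)" % (lbs, lbs * 0.4536))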
Modular Cabinet Stability  
Cabinet stabilizers are required when fewer than four cabinets are bayed together.
NOTE: Cabinet stability is of special concern when equipment is routinely installed, removed,  
or accessed within the cabinet. Stability is addressed through the use of leveling feet, baying kits,  
fixed stabilizers, and/or ballast.  
For information about best practices for cabinets, your service provider can consult:  
HP 10000 G2 Series Rack User Guide  
Best practices for HP 10000 Series and HP 10000 G2 Series Racks  
Environmental Specifications  
This subsection provides information about environmental specifications and covers these topics:  
Heat Dissipation Specifications and Worksheet  
Enclosure Type                                           Number Installed    Unit Heat (BTU/hour with    Unit Heat (BTU/hour with    Total (BTU/hour)
                                                                             single AC line powered)     both AC lines powered)
c7000 (1)                                                                    12400                       13700
IP or Storage CLIM                                                           1070                        1236
SAS disk enclosure                                                           869                         936
IOAM (2)                                                                     893                         1112
Fibre Channel disk module (FCDM) (3)                                         990                         1187
Maintenance switch (Ethernet) (4)                                            150                         -
Rack-mounted system console with keyboard and display                        696                         -
(1) Decrease the BTU/hour specification by 1730 BTU/hour for each empty NonStop Server Blade slot. For example, a
c7000 that only has four NonStop Server Blades installed would be rated 13700 BTU/hour minus (4 server blade slots x
1730 BTU/hour) = 6780 BTU/hour.
(2) Measured with 10 Fibre Channel ServerNet adapters installed and active.
(3) Measured with 14 disk drives installed and active.
(4) Maintenance switch has only one plug.
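The same worksheet arithmetic can be scripted. The sketch below is only an illustration (the enclosure mix and installed blade count are assumptions; the BTU/hour figures and the 1730 BTU/hour-per-empty-slot derating are taken from the table and footnote 1 above):

# Sketch: total heat load (BTU/hour) for one cabinet, derating the c7000
# for empty NonStop Server Blade slots per footnote 1. Both-lines figures
# are used where they exist; single-plug devices use their single-line figure.

BTU_PER_ENCLOSURE = {
    "c7000": 13700,
    "ip_or_storage_clim": 1236,
    "sas_disk_enclosure": 936,
    "ioam_enclosure": 1112,
    "fcdm": 1187,
    "maintenance_switch": 150,
    "system_console": 696,
}
BTU_PER_EMPTY_BLADE_SLOT = 1730
C7000_BLADE_SLOTS = 8

def cabinet_btu(blades_installed: int, counts: dict) -> int:
    """Sum the heat dissipation worksheet for one cabinet."""
    empty = C7000_BLADE_SLOTS - blades_installed
    total = BTU_PER_ENCLOSURE["c7000"] - empty * BTU_PER_EMPTY_BLADE_SLOT
    for enclosure, qty in counts.items():
        total += BTU_PER_ENCLOSURE[enclosure] * qty
    return total

if __name__ == "__main__":
    # Assumed example: 4 blades installed, 2 CLIMs, 2 SAS disk enclosures.
    print(cabinet_btu(4, {"ip_or_storage_clim": 2, "sas_disk_enclosure": 2}), "BTU/hour")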
Operating Temperature, Humidity, and Altitude  
Specification                                   Operating Range (1)             Recommended Range (1)           Maximum Rate of Change per Hour
Temperature (IOAM, rack-mounted system          41° to 95° F (5° to 35° C)      68° to 72° F (20° to 25° C)     9° F (5° C) Repetitive
console, and maintenance switch)                                                                                36° F (20° C) Nonrepetitive
Temperature (c7000, CLIMs, SAS disk             50° to 95° F (10° to 35° C)     -                               0.6° F (1° C) Repetitive
enclosure, and Fibre Channel disk module)                                                                       1.6° F (3° C) Nonrepetitive
Humidity (all except c7000 enclosure)           15% to 80%, noncondensing       40% to 50%, noncondensing       6%, noncondensing
Humidity (c7000 enclosure)                      20% to 80%, noncondensing       40% to 55%, noncondensing       6%, noncondensing
Altitude (2)                                    0 to 10,000 feet                -                               -
                                                (0 to 3,048 meters)
(1) Operating and recommended ranges refer to the ambient air temperature and humidity measured 19.7 in. (50 cm)
from the front of the air intake cooling vents.
(2) For each 1000 feet (305 m) increase in altitude above 10,000 feet (up to a maximum of 15,000 feet), subtract 1.5° F
(0.83° C) from the upper limit of the operating and recommended temperature ranges.
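Footnote 2 above can be expressed as a small calculation. The sketch below is illustrative only (the site altitude in the example is an assumption); it derates the 35° C upper operating limit for altitudes above 10,000 feet:

# Sketch: derate the upper operating temperature limit above 10,000 feet,
# subtracting 0.83 deg C (1.5 deg F) per 1000 feet, up to 15,000 feet (footnote 2).

BASE_LIMIT_C = 35.0            # upper limit of the operating range at or below 10,000 ft
DERATE_C_PER_1000_FT = 0.83

def derated_limit_c(altitude_ft: float) -> float:
    """Return the derated upper operating temperature limit in deg C."""
    if altitude_ft <= 10_000:
        return BASE_LIMIT_C
    altitude_ft = min(altitude_ft, 15_000)          # specification stops at 15,000 ft
    excess_thousands = (altitude_ft - 10_000) / 1000
    return BASE_LIMIT_C - excess_thousands * DERATE_C_PER_1000_FT

if __name__ == "__main__":
    # Assumed example: a site at 12,500 feet.
    print("Upper operating limit at 12,500 ft: %.1f deg C" % derated_limit_c(12_500))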
Nonoperating Temperature, Humidity, and Altitude  
Temperature:  
Up to 72-hour storage: -40° to 150° F (-40° to 66° C)  
Up to 6-month storage: -20° to 131° F (-29° to 55° C)  
Reasonable rate of change with noncondensing relative humidity during the transition  
from warm to cold  
Relative humidity: 10% to 80%, noncondensing  
Altitude: 0 to 40,000 feet (0 to 12,192 meters)  
Cooling Airflow Direction  
NOTE:  
Because the front door of the enclosure must be adequately ventilated to allow air to  
enter the enclosure and the rear door must be adequately ventilated to allow air to escape, do  
not block the ventilation apertures of a NonStop BladeSystem.  
Each NonStop BladeSystem includes 10 Active Cool fans that provide high-volume, high pressure  
airflow at even the slowest fan speeds. Air flow for each NonStop BladeSystem enters through  
a slot in the front of the c7000 enclosure and is pulled into the interconnect bays. Ducts allow the  
air to move from the front to the rear of the enclosure where it is pulled into the interconnects  
and the center plenum. The air is then exhausted out the rear of the enclosure.  
Blanking Panels  
If the NonStop BladeSystem is not completely filled with components, the gaps between these  
components can cause adverse changes in the airflow, negatively impacting cooling within the  
rack. You must cover any gaps with blanking panels. In high density environments, air gaps in  
the enclosure and between adjacent enclosures should be sealed to prevent recirculation of hot-air  
from the rear of the enclosure to the front.  
Typical Acoustic Noise Emissions  
70 dB(A) (sound pressure level at operator position)  
Tested Electrostatic Immunity  
Contact discharge: 8 kV  
Air discharge: 20 kV  
Calculating Specifications for Enclosure Combinations  
Power and thermal calculations assume that each enclosure in the cabinet is fully populated.  
The power and heat load is less when enclosures are not fully populated, such as a Fibre Channel  
disk module with fewer disk drives.  
AC current calculations assume that one PDU delivers all power. In normal operation, the power  
is split equally between the two PDUs in the cabinet. However, calculate the power load to  
assume delivery from only one PDU to allow the system to continue to operate if one of the two  
AC power sources or PDUs fails.  
“Example of Cabinet Load Calculations” (page 52) lists the weight, power, and thermal  
calculations for a system with:  
One c7000 enclosure with 8 NonStop Server Blades  
Two IP or Storage CLIMs  
Two SAS disk enclosures  
One IOAM enclosure  
Two Fibre Channel disk modules  
One rack-mounted system console with keyboard/monitor units  
One maintenance switch  
One 42U high cabinet  
For a total thermal load for a system with multiple cabinets, add the heat outputs for all the  
cabinets in the system.  
Table 3-1 Example of Cabinet Load Calculations  
Component                          Quantity   Height   Weight       Total Volt-amps (VA) (1)          BTU/hour (2)
                                              (U)      lbs (kg)     Single AC line   Both AC lines    Single AC line   Both AC lines
                                                                    powered          powered          powered          powered
c7000 enclosure                    1          10       480 (218)    4300             4400             12400            13700
IP or Storage CLIM                 2          4        120 (54)     640              740              2140             2472
SAS disk enclosure                 2          4        96 (50)      520              560              1738             1872
IOAM enclosure                     1          11       235 (106)    262              326              893              1112
Fibre Channel disk module          2          6        156 (70)     580              696              1980             2374
Rack-mounted System Console        1          2        34 (15)      204              204              696              696
(includes keyboard and monitor)
Maint. switch                      1          1        6 (3)        44               44               150              150
Cabinet                            1          42       303 (137)    -                -                -                -
Total                              --         38       1430 (653)   6550             6970             19997            22376
(1) Decrease the apparent power VA specification by 508 VA for each empty NonStop Server Blade slot. For example,
a c7000 that only has four NonStop Server Blades installed would be rated 4400 VA minus (4 server blade slots x 508
VA) = 2368 VA apparent power.
(2) Decrease the BTU/hour specification by 1730 BTU/hour for each empty NonStop Server Blade slot. For example, a
c7000 that only has four NonStop Server Blades installed would be rated 13700 BTU/hour minus (4 server blade slots x
1730 BTU/hour) = 6780 BTU/hour.
4 System Configuration Guidelines  
This chapter provides configuration guidelines for a NonStop BladeSystem and includes these  
main topics:  
NonStop BladeSystems use a flexible modular architecture. Therefore, various configurations of  
the system's modular components are possible within the configuration restrictions stated in this  
section and Chapter 5 (page 77).  
Internal ServerNet Interconnect Cabling  
This subsection includes:  
Dedicated Service LAN Cables  
The NonStop BladeSystem uses Category 5, unshielded twisted-pair Ethernet cables for the  
internal dedicated service LAN and for connections between the application LAN equipment  
and IP CLIM or IOAM enclosure.  
Length Restrictions for Optional Cables  
Maximum allowable lengths of optional cables connecting to components outside the modular  
cabinet are:  
Connection                                                Fiber Type   Connectors              Maximum Length   Product ID
IOAM enclosure (Fibre Channel port) to ESS                MMF          LC-LC                   250 m            M8900nn
IOAM enclosure (Fibre Channel port) to FC switch          MMF          LC-LC                   250 m            M8900nn
Storage CLIM enclosure (Fibre Channel HBA) to FC tape     MMF          LC-LC                   250 m            M8900nn
Storage CLIM enclosure (Fibre Channel HBA) to ESS         MMF          LC-LC                   250 m            M8900nn
Storage CLIM enclosure (Fibre Channel HBA) to FC switch   MMF          LC-LC                   250 m            M8900nn
Storage CLIM enclosure (SAS HBA) to SAS tape              N.A.         SFF-8470 to SFF-8088    6 m              M8905nn
Storage CLIM enclosure (SAS HBA) to SAS disk enclosure    N.A.         SFF-8470 to SFF-8088    6 m              M8905nn
SAS disk enclosure to SAS disk enclosure                  N.A.         SFF-8088 to SFF-8088    6 m              M8906nn
Although a considerable cable length can exist between the modular enclosures in the system,
HP recommends that the cable length between each of the enclosures be kept as short as possible.
Cable Product IDs  
ServerNet Fabric and Supported Connections  
This subsection includes:  
The ServerNet X and Y fabrics for the NonStop BladeSystem are provided by the double-wide  
ServerNet switch in the c7000 enclosure. Each c7000 enclosure requires two ServerNet switches  
for fault tolerance and each switch has four ServerNet connection groups:  
ServerNet Cluster Connections  
ServerNet Fabric Cross-Link Connections  
Interconnections between c7000 enclosures  
I/O Connections (Standard I/O and High I/O options)  
The I/O connectivity to each of these groups is provided by one of two ServerNet switch options:  
either Standard I/O or High I/O.  
ServerNet Cluster Connections  
At J06.03, only standard ServerNet cluster connections via cluster switches are supported, using
connections to both types of ServerNet-based cluster switches (6770 and 6780). There are two
small form-factor pluggable (SFP) ports on each c7000 enclosure ServerNet switch: a single-mode
fiber (SMF) port (port 12) and a multimode fiber (MMF) port (port 11) for the two ServerNet
cluster connection styles. Only one of these ports can be used at a time, and only one connection
per fabric (from the appropriate ServerNet switch for that fabric in group 100) to the system's
cluster fabric is supported.
ServerNet cluster connections on NonStop BladeSystems follow the ServerNet cluster and cable  
length rules and restrictions. For more information, see these manuals:  
ServerNet Cluster Supplement for NonStop BladeSystems  
For 6770 switches and star topologies: ServerNet Cluster Manual  
For 6780 switches and layered topology: ServerNet Cluster 6780 Planning and Installation Guide  
ServerNet Fabric Cross-Link Connections  
A pair of small form-factor pluggable (SFP) modules with standard LC-Duplex connectors is provided  
to allow for the ServerNet fabric cross-link connection. Connections are made to ports 9 and 10  
(labeled X1 and X2) on the c7000 enclosure ServerNet switch.  
Interconnections Between c7000 Enclosures  
A single c7000 enclosure can contain eight NonStop Server Blades. Two c7000 enclosures are
interconnected to create a 16-processor system. These interconnections are provided by two quad
optic ports — ports 1 and 2 (labeled GA and GB) — located on the c7000 enclosure ServerNet
switches in interconnect bays 5 and 7. The GA port on the first c7000 enclosure is connected
to the GA port on the second c7000 enclosure (same fabric), and likewise the GB port to the
GB port. These connections provide eight ServerNet cross-links between the two sets of eight
NonStop processors and the ServerNet routers on the c7000 enclosure ServerNet switch.
I/O Connections (Standard and High I/O ServerNet Switch Configurations)  
There are two types of c7000 enclosure ServerNet switches: Standard I/O and High I/O. Each  
pair of ServerNet switches in a c7000 enclosure must be identical, either Standard I/O or High  
I/O. However, you can mix ServerNet switches between enclosures.  
The main difference between the Standard I/O and High I/O switches is the number and type of
quad optics modules that are installed for I/O connectivity.
The Standard I/O ServerNet switch has three quad optic modules: ports 3, 4, and 8 (labeled GC,
EA, and EE) for a total of 12 ServerNet links, as shown following:
Figure 4-1 ServerNet Switch Standard I/O Supported Connections  
The High I/O ServerNet switch has six quad optic modules — ports 3, 4, 5, 6, 7, and 8 (labeled
GC, EA, EB, EC, ED, and EE) — for a total of 24 ServerNet links, as shown following. If both c7000
enclosures in a 16-processor system contain High I/O ServerNet switches, there are a total of 48
ServerNet connections for I/O.
Figure 4-2 ServerNet Switch High I/O Supported Connections  
Connections to IOAM Enclosures  
The NonStop BladeSystem supports connections to an IOAM enclosure. The IOAM enclosure
requires 4-way ServerNet links. If you want four IOAM enclosures attached to the first c7000
enclosure, only the High I/O ServerNet switch provides this number of connections, which are
available on quad optic ports 4, 5, 6, and 7 (labeled EA, EB, EC, and ED) as illustrated in Figure 4-2.
The NonStop BladeSystem supports a maximum of six IOAMs in a system with 16 processors.
For a 16-processor system, the connection points are asymmetrical between the ServerNet switches:
only ports EA and EC support connections to an IOAM enclosure on the second ServerNet switch.
For the Standard I/O ServerNet switch, only one IOAM module can be attached per c7000 enclosure.
Additionally, if a Standard I/O ServerNet switch is used in the first c7000 enclosure for one IOAM
enclosure, then the second c7000 enclosure supports only one more IOAM enclosure, regardless
of the type of ServerNet switch (Standard I/O or High I/O).
Connections to CLIMs  
The NonStop BladeSystem supports a maximum of 24 CLIM modules per system. A CLIM uses  
either one or two ServerNet connections to a fabric. The Storage CLIM typically uses two  
connections per fabric to achieve high disk performance. The IP CLIM typically uses one  
connection per ServerNet fabric. For I/O connections, a breakout cable is used on the back panel  
of the c7000 enclosure ServerNet switch to convert to standard LC-Duplex style connections.  
NonStop BladeSystem Port Connections  
This subsection includes:  
Fibre Channel Ports to Fibre Channel Disk Modules  
Fibre Channel disk modules (FCDMs) can only be connected to the FCSA in an IOAM enclosure.  
FCDMs are directly connected to the Fibre Channel ports on an IOAM enclosure with this  
exception:  
Up to four FCDMs (or up to four daisy-chained configurations, with each daisy-chain configuration
containing four FCDMs) can be connected to the FCSA ports on an IOAM enclosure in a NonStop
BladeSystem.
Fibre Channel Ports to Fibre Tape Devices  
Fibre Channel tape devices can be directly connected to the Fibre Channel ports on a Storage  
CLIM or an FCSA in an IOAM enclosure. With a Fibre Channel tape drive connected to the  
system, you can use the BACKUP and RESTORE utilities to save data to and restore data from  
tape.  
SAS Ports to SAS Disk Enclosures  
SAS disk enclosures can be connected directly to the two HBA SAS ports on a Storage CLIM  
with this exception:  
Daisy-chain configurations are not supported.  
SAS Ports to SAS Tape Devices  
SAS tape devices have one SAS port that can be directly connected to the HBA SAS port on a  
Storage CLIM. Each SAS tape enclosure supports two tape drives. With a SAS tape drive connected  
to the system, you can use the BACKUP and RESTORE utilities to save data to and restore data  
from tape.  
Storage CLIM Devices  
This subsection includes:  
The NonStop BladeSystem uses the rack-mounted SAS disk enclosure; its SAS disk drives
are controlled through the Storage CLIM. This illustration shows the ports on a Storage CLIM:
NOTE: Both the Storage and IP CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more  
information about the CIP subsystem, see the Cluster I/O Protocols Configuration and Management  
Manual.  
This illustration shows the locations of the hardware in the SAS disk enclosure as well as the I/O  
modules on the rear of the enclosure for connecting to the Storage CLIM.  
SAS disk enclosures connect to Storage CLIMs via SAS cables. For details on cable types, see  
Factory-Default Disk Volume Locations for SAS Disk Devices  
This illustration shows where the factory-default locations for the primary and mirror system  
disk volumes reside in separate disk enclosures:  
Configuration Restrictions for Storage CLIMs  
The maximum number of logical unit numbers (LUNs) for each CLIM, including SAS disks, ESS
disks, and tapes, is 512. Each primary, backup, mirror, and mirror backup path is counted in this
maximum.
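As a planning aid, the path-counting rule above can be checked with a few lines of arithmetic. This Python sketch is an illustration only (the device counts in the example are assumptions); it counts every path configured through one CLIM against the 512-LUN limit:

# Sketch: every path configured through a Storage CLIM (primary, backup,
# mirror, mirror backup, or tape) counts toward its 512-LUN maximum.

MAX_LUNS_PER_CLIM = 512

def luns_used(primary=0, backup=0, mirror=0, mirror_backup=0, tape=0):
    """Count LUN entries consumed on one CLIM."""
    return primary + backup + mirror + mirror_backup + tape

if __name__ == "__main__":
    # Assumed example: a CLIM carrying the primary and mirror-backup paths
    # of 200 mirrored volumes plus 4 tape LUNs.
    used = luns_used(primary=200, mirror_backup=200, tape=4)
    print("%d of %d LUNs used:" % (used, MAX_LUNS_PER_CLIM),
          "OK" if used <= MAX_LUNS_PER_CLIM else "exceeds the limit")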
Use only the supported configurations as described below.  
Configurations for Storage CLIM and SAS Disk Enclosures  
These subsections show the supported configurations for SAS Disk enclosures with Storage  
CLIMs:  
Two Storage CLIMs, Two SAS Disk Enclosures  
This illustration shows example cable connections between the two Storage CLIM, two SAS disk  
enclosure configuration:  
Figure 4-3 Two Storage CLIMs, Two SAS Disk Enclosure Configuration  
This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk  
locations in the configuration of two Storage CLIMs and two SAS disk enclosures. In this case,  
$SYSTEM, $DSMSCM, $AUDIT, and $OSS are configured as mirrored SAS disk volumes:  
Disk Volume Name    Primary and Mirror-Backup CLIM    Backup and Mirror CLIM
$SYSTEM             100.2.5.3.1                       100.2.5.3.3
$DSMSCM             100.2.5.3.1                       100.2.5.3.3
$AUDIT              100.2.5.3.1                       100.2.5.3.3
$OSS                100.2.5.3.1                       100.2.5.3.3
* For an illustration of the factory-default slot locations for a SAS disk enclosure, see “Factory-Default Disk Volume
Locations for SAS Disk Devices” (page 58).
Two Storage CLIMs, Four SAS Disk Enclosures  
This illustration shows example cable connections for the two Storage CLIM, four SAS disk  
enclosures configuration:  
Figure 4-4 Two Storage CLIMs, Four SAS Disk Enclosure Configuration  
This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk  
locations in the configuration of two Storage CLIMs and four SAS disk enclosures. In this case,  
$SYSTEM, $DSMSCM, $AUDIT, and $OSS are configured as mirrored SAS disk volumes:  
Disk Volume   Primary       Backup        Mirror        Mirror-Backup   Primary   Mirror   Primary Disk Location    Mirror Disk Location
Name          CLIM          CLIM          CLIM          CLIM            LUN       LUN      (Bay in Primary SAS      (Bay in Mirror SAS
                                                                                           Enclosure)               Enclosure)
$SYSTEM       100.2.5.3.1   100.2.5.4.1   100.2.5.4.3   100.2.5.3.3     101       101      1                        1
$DSMSCM       100.2.5.3.1   100.2.5.4.1   100.2.5.4.3   100.2.5.3.3     102       102      2                        2
$AUDIT        100.2.5.3.1   100.2.5.4.1   100.2.5.4.3   100.2.5.3.3     103       103      3                        3
$OSS          100.2.5.3.1   100.2.5.4.1   100.2.5.4.3   100.2.5.3.3     104       104      4                        4
Fibre Channel Devices  
This subsection describes Fibre Channel devices and covers these topics:  
The rack-mounted Fibre Channel disk module (FCDM) can only be used with NonStop  
BladeSystems that have IOAM enclosures. An FCDM and its disk drives are controlled through  
the Fibre Channel ServerNet adapter (FCSA). For more information on the FCSA, see the  
Fibre-Channel ServerNet Adapter Installation and Support Guide. For more information on the Fibre  
Channel disk module (FCDM), see “Fibre Channel Disk Module (FCDM)” (page 20). For examples
of cable connections between FCSAs and FCDMs, see “Example Configurations of the IOAM
Enclosure and Fibre Channel Disk Module” (page 63).
This illustration shows an FCSA with indicators and ports:  
This illustration shows the locations of the hardware in the Fibre Channel disk module as well  
as the Fibre Channel port connectors at the back of the enclosure:  
Fibre Channel disk modules connect to Fibre Channel ServerNet adapters (FCSAs) via Fibre
Channel arbitrated loop (FC-AL) cables. This drawing shows the two Fibre Channel arbitrated
loops implemented within the Fibre Channel disk module:
Factory-Default Disk Volume Locations for FCDMs  
This illustration shows where the factory-default locations for the primary and mirror system  
disk volumes reside in separate Fibre Channel disk modules:  
FCSA location and cable connections vary according to the various controller and Fibre Channel  
disk module combinations.  
Configurations for Fibre Channel Devices  
Storage subsystems in NonStop S-series systems use a fixed hardware layout. Each enclosure
can have up to four controllers for storage devices and up to 16 internal disk drives. The controllers
and disk drives always have a fixed logical location with standardized location IDs of
group-module-slot. Only the group number changes, as determined by the enclosure position in
the ServerNet topology.
NonStop BladeSystems, however, have no fixed boundaries for the Fibre Channel hardware
layout. Up to 60 FCSAs (or 120 ServerNet addressable controllers) and 240 Fibre Channel disk
enclosures can be configured, with identification depending on the ServerNet connection of the
IOAM and the slot housing the FCSAs.
Configuration Restrictions for Fibre Channel Devices  
These configuration restrictions apply and are enforced by the Subsystem Control Facility (SCF):  
Primary and mirror disk drives cannot connect to the same Fibre Channel loop. Loss of the  
Fibre Channel loop makes both the primary volume and the mirrored volume inaccessible.  
This configuration inhibits fault tolerance.  
Disk drives in different Fibre Channel disk modules on a daisy chain connect to the same  
Fibre Channel loop.  
The primary path and backup Fibre Channel communication links to a disk drive should  
not connect to FCSAs in the same module of an IOAM enclosure. In a fully populated system,  
loss of one FCSA can make up to 56 disk drives inaccessible on a single Fibre Channel  
communications path. This configuration is allowed, but only if you override an SCF warning  
message.  
The mirror path and mirror backup Fibre Channel communication links to a disk drive  
should not connect to FCSAs in the same module of an IOAM enclosure. In a fully populated  
system, loss of one FCSA can make up to 56 disk drives inaccessible on a single Fibre Channel  
communications path. This configuration is allowed, but only if you override an SCF warning  
message.  
Recommendations for Fibre Channel Device Configuration  
These recommendations apply to FCSA and Fibre Channel disk module configurations:  
Primary Fibre Channel disk module connects to the FCSA F-SAC 1.  
Mirror Fibre Channel disk module connects to the FCSA F-SAC 2.  
FC-AL port A1 is the incoming port from an FCSA or from another Fibre Channel disk  
module.  
FC-AL port A2 is the outbound port to another Fibre Channel disk module.  
FC-AL port B2 is the incoming port from an FCSA or from a Fibre Channel disk module.  
FC-AL port B1 is the outbound port to another Fibre Channel disk module.  
In a daisy-chain configuration, the ID expander harness determines the enclosure number.  
Enclosure 1 is always at the bottom of the chain.  
FCSAs can be installed in slots 1 through 5 in an IOAM.  
G4SAs can be installed in slots 1 through 5 in an IOAM.  
In systems with two or more cabinets, primary and mirror Fibre Channel disk modules  
reside in separate cabinets to prevent application or system outage if a power outage affects  
one cabinet.  
With primary and mirror Fibre Channel disk modules in the same cabinet, the primary Fibre  
Channel disk module resides in a lower U than the mirror Fibre Channel disk module.  
Fibre Channel disk drives are configured with dual paths.  
Where possible, FCSAs and Fibre Channel disk modules are configured with four FCSAs  
and four Fibre Channel disk modules for maximum fault tolerance. If FCSAs are not in  
groups of four, the remaining FCSAs and Fibre Channel disk modules can be configured in  
other fault-tolerant configurations such as with two FCSAs and two Fibre Channel disk  
modules or four FCSAs and three Fibre Channel disk modules.  
In systems with one IOAM enclosure:  
With two FCSAs and two Fibre Channel disk modules, the primary FCSA resides in
module 2 of the IOAM enclosure, and the backup FCSA resides in module 3. (See the
example configuration in “Two FCSAs, Two FCDMs, One IOAM Enclosure” (page 64).)
With four FCSAs and four Fibre Channel disk modules, FCSA 1 and FCSA 2 reside in
module 2 of the IOAM enclosure, and FCSA 3 and FCSA 4 reside in module 3. (See the
example configuration in “Four FCSAs, Four FCDMs, One IOAM Enclosure” (page 64).)
In systems with two or more IOAM enclosures:
With two FCSAs and two Fibre Channel disk modules, the primary FCSA resides in
IOAM enclosure 1, and the backup FCSA resides in IOAM enclosure 2. (See the example
configuration in “Two FCSAs, Two FCDMs, Two IOAM Enclosures” (page 65).)
With four FCSAs and four Fibre Channel disk modules, FCSA 1 and FCSA 2 reside in
IOAM enclosure 1, and FCSA 3 and FCSA 4 reside in IOAM enclosure 2. (See the example
configuration in “Four FCSAs, Four FCDMs, Two IOAM Enclosures” (page 66).)
Daisy-chain configurations follow the same configuration restrictions and rules that apply  
to configurations that are not daisy-chained. (See “Daisy-Chain Configurations” (page 67).)  
Fibre Channel disk modules containing mirrored volumes must be installed in separate  
daisy chains.  
Daisy-chained configurations require that all Fibre Channel disk modules reside in the same  
cabinet and be physically grouped together.  
Daisy-chain configurations require an ID expander harness with terminators for proper  
Fibre Channel disk module and disk drive identification.  
If, after you connect all Fibre Channel disk modules in configurations of four FCSAs and four
Fibre Channel disk modules, three Fibre Channel disk modules remain unconnected, connect
them to four FCSAs. (See the example configuration in “Four FCSAs, Three FCDMs, One IOAM
Enclosure” (page 69).)
Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module  
These subsections show various example configurations of FCSA controllers and Fibre Channel  
disk modules with IOAM enclosures.  
NOTE: Although it is not a requirement for fault tolerance to house the primary and mirror
disk drives in separate FCDMs, the example configurations show FCDMs housing only primary
or mirror drives, mainly for simplicity in keeping track of the physical locations of the drives.
Two FCSAs, Two FCDMs, One IOAM Enclosure  
This illustration shows example cable connections between the two FCSAs and the primary and  
mirror Fibre Channel disk modules:  
This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay
(GMSB) identification for the factory-default system disk locations in the configuration of two
FCSAs, two Fibre Channel disk modules, and one IOAM enclosure:
Disk Volume Name      FCSA GMSP                    Disk GMSB*
$SYSTEM (primary)     110.2.1.1 and 110.3.1.1      110.211.101
$DSMSCM (primary)     110.2.1.1 and 110.3.1.1      110.211.102
$AUDIT (primary)      110.2.1.1 and 110.3.1.1      110.211.103
$OSS (primary)        110.2.1.1 and 110.3.1.1      110.211.104
$SYSTEM (mirror)      110.2.1.2 and 110.3.1.2      110.212.101
$DSMSCM (mirror)      110.2.1.2 and 110.3.1.2      110.212.102
$AUDIT (mirror)       110.2.1.2 and 110.3.1.2      110.212.103
$OSS (mirror)         110.2.1.2 and 110.3.1.2      110.212.104
* For an illustration of the factory-default slot locations for a Fibre Channel disk module, see “Factory-Default Disk
Volume Locations for FCDMs” (page 61).
Four FCSAs, Four FCDMs, One IOAM Enclosure  
This illustration shows example cable connections between the four FCSAs and the two sets of  
primary and mirror Fibre Channel disk modules:  
This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay  
(GMSB) identification for the factory-default system disk locations in the configuration of four  
FCSAs, four Fibre Channel disk modules, and one IOAM enclosure:  
Disk Volume Name        FCSA GMSP                    Disk GMSB (1)
$SYSTEM (primary 1)     110.2.1.1 and 110.3.1.1      110.211.101
$DSMSCM (primary 1)     110.2.1.1 and 110.3.1.1      110.211.102
$AUDIT (primary 1)      110.2.1.1 and 110.3.1.1      110.211.103
$OSS (primary 1)        110.2.1.1 and 110.3.1.1      110.211.104
$SYSTEM (mirror 1)      110.2.1.2 and 110.3.1.2      110.212.101
$DSMSCM (mirror 1)      110.2.1.2 and 110.3.1.2      110.212.102
$AUDIT (mirror 1)       110.2.1.2 and 110.3.1.2      110.212.103
$OSS (mirror 1)         110.2.1.2 and 110.3.1.2      110.212.104
(1) For an illustration of the factory-default slot locations for a Fibre Channel disk module, see “Factory-Default Disk
Volume Locations for FCDMs” (page 61).
Two FCSAs, Two FCDMs, Two IOAM Enclosures  
This illustration shows example cable connections between the two FCSAs split between two  
IOAM enclosures and one set of primary and mirror Fibre Channel disk modules:  
This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay
(GMSB) identification for the factory-default system disk locations in the configuration of two
FCSAs, two Fibre Channel disk modules, and two IOAM enclosures:
Disk Volume Name        FCSA GMSP                    Disk GMSB (1)
$SYSTEM (primary 1)     110.2.1.1 and 111.2.1.1      110.211.101
$DSMSCM (primary 1)     110.2.1.1 and 111.2.1.1      110.211.102
$AUDIT (primary 1)      110.2.1.1 and 111.2.1.1      110.211.103
$OSS (primary 1)        110.2.1.1 and 111.2.1.1      110.211.104
$SYSTEM (mirror 1)      110.2.1.2 and 111.2.1.2      110.212.101
$DSMSCM (mirror 1)      110.2.1.2 and 111.2.1.2      110.212.102
$AUDIT (mirror 1)       110.2.1.2 and 111.2.1.2      110.212.103
$OSS (mirror 1)         110.2.1.2 and 111.2.1.2      110.212.104
(1) For an illustration of the factory-default slot locations for a Fibre Channel disk module, see “Factory-Default Disk
Volume Locations for FCDMs” (page 61).
Four FCSAs, Four FCDMs, Two IOAM Enclosures  
This illustration shows example cable connections between the four FCSAs split between two  
IOAM enclosures and two sets of primary and mirror Fibre Channel disk modules:  
This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay  
(GMSB) identification for the factory-default system disk locations in the configuration of four  
FCSAs, four Fibre Channel disk modules, and two IOAM enclosures:  
Disk Volume Name      FCSA GMSP                    Disk GMSB*
$SYSTEM (primary)     110.2.1.1 and 111.2.1.1      110.211.101
$DSMSCM (primary)     110.2.1.1 and 111.2.1.1      110.211.102
$AUDIT (primary)      110.2.1.1 and 111.2.1.1      110.211.103
$OSS (primary)        110.2.1.1 and 111.2.1.1      110.211.104
$SYSTEM (mirror)      110.2.1.2 and 111.2.1.2      110.212.101
$DSMSCM (mirror)      110.2.1.2 and 111.2.1.2      110.212.102
$AUDIT (mirror)       110.2.1.2 and 111.2.1.2      110.212.103
$OSS (mirror)         110.2.1.2 and 111.2.1.2      110.212.104
* For an illustration of the factory-default slot locations for a Fibre Channel disk module, see “Factory-Default Disk
Volume Locations for FCDMs” (page 61).
Daisy-Chain Configurations  
When planning for possible use of daisy-chained disks, consider:  
Daisy-chained disks recommended:
Cost-sensitive storage and applications using low-bandwidth disk I/O.
Low-cost, high-capacity data storage is important.
Daisy-chained disks not recommended:
Many volumes in a large Fibre Channel loop. The more volumes that exist in a larger loop, the
higher the potential for negative impact from a failure that takes down a Fibre Channel loop.
Applications with a highly mixed workload, such as transaction databases or applications with
high disk I/O.
Requirements for daisy-chain:
All daisy-chained Fibre Channel disk modules reside in the same cabinet and are physically
grouped together.
An ID expander harness with terminators is installed for proper Fibre Channel disk module and
drive identification.
An FCSA for each Fibre Channel loop is installed in a different IOAM module for fault tolerance.
Two Fibre Channel disk modules minimum, with four Fibre Channel disk modules maximum
per daisy chain.
This illustration shows an example of cable connections between the two FCSAs and four Fibre  
Channel disk modules in a single daisy-chain configuration:  
A second equivalent configuration, including an IOAM enclosure, two FCSAs, four Fibre Channel  
disk modules with an ID expander, is required for fault-tolerant mirrored disk storage. Installing  
each mirrored disk in the same corresponding FCDM and bay number as its primary disk is not
required, but it is recommended to simplify the physical management and identification of the
disks.
This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay
(GMSB) identification for the factory-default system disk locations in a daisy-chained
configuration:
Disk Volume Name      FCSA GMSP                    Disk GMSB*
$SYSTEM               110.2.1.1 and 110.3.1.1      110.211.101
$DSMSCM               110.2.1.1 and 110.3.1.1      110.211.102
$AUDIT                110.2.1.1 and 110.3.1.1      110.211.103
$OSS                  110.2.1.1 and 110.3.1.1      110.211.104
* For an illustration of the factory-default slot locations for a Fibre Channel disk module, see “Factory-Default Disk
Volume Locations for FCDMs” (page 61).
Four FCSAs, Three FCDMs, One IOAM Enclosure  
This illustration shows example cable connections between the four FCSAs and three Fibre  
Channel disk modules with the primary and mirror drives split within each Fibre Channel disk  
module:  
This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay  
(GMSB) identification for the factory-default disk volumes for the configuration of four FCSAs,  
three Fibre Channel disk modules, and one IOAM enclosure:  
Disk Volume Name        FCSA GMSP                    Disk GMSB
$SYSTEM (primary 1)     110.2.1.2 and 110.3.1.2      110.212.101
$DSMSCM (primary 1)     110.2.1.2 and 110.3.1.2      110.212.102
$AUDIT (primary 1)      110.2.1.2 and 110.3.1.2      110.212.103
$OSS (primary 1)        110.2.1.2 and 110.3.1.2      110.212.104
$SYSTEM (mirror 1)      110.2.2.1 and 110.3.2.1      110.221.108
$DSMSCM (mirror 1)      110.2.2.1 and 110.3.2.1      110.221.109
$AUDIT (mirror 1)       110.2.2.1 and 110.3.2.1      110.221.110
$OSS (mirror 1)         110.2.2.1 and 110.3.2.1      110.221.111
This illustration shows the factory-default locations for the configurations of four FCSAs and  
three Fibre Channel disk modules where the primary system file disk volumes are in Fibre  
Channel disk module 1:  
This illustration shows the factory-default locations for the configurations of four FCSAs with  
three Fibre Channel disk modules where the mirror system file disk volumes are in Fibre Channel  
disk module 3:  
Ethernet to Networks  
Depending on your configuration, the Ethernet ports in an IP CLIM or a G4SA installed in an  
IOAM enclosure provide Gigabit connectivity between NonStop BladeSystems and Ethernet  
LANs. The Ethernet port is an end node on the ServerNet and uses either fiber-optic or copper  
cable for connectivity to user application LANs, as well as for the dedicated service LAN.  
For information on the Ethernet ports on a G4SA installed in an IOAM enclosure, see the Gigabit  
Ethernet 4-Port Adapter (G4SA) Installation and Support Guide.  
The IP CLIM has two types of Ethernet configurations: IP CLIM A and IP CLIM B.  
This illustration shows the Ethernet ports and ServerNet fabric connections on an IP CLIM with  
the IP CLIM A configuration:  
This illustration shows the Ethernet ports and ServerNet fabric connections on an IP CLIM with  
the IP CLIM B configuration:  
Both the IP and Storage CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more  
information about managing your CLIMs using the CIP subsystem, see the Cluster I/O Protocols  
Configuration and Management Manual.  
Managing NonStop BladeSystem Resources  
This subsection provides procedures and information for managing your NonStop BladeSystem  
resources and includes these topics:  
Changing Customer Passwords  
NonStop BladeSystems are shipped with default user names and default passwords for the  
Administrator for certain components and software. Once your system is set up, you should  
change these passwords to your own passwords.  
Table 4-1 Default User Names and Passwords
NonStop BladeSystem Component        Default User Name   Default Password   To change this password, see...
Onboard Administrator (OA)           Admin               hpnonstop          “Change the Onboard Administrator (OA) Password”
CLIM iLO                             Admin               hpnonstop          “Change the CLIM iLO Password”
CLIM Maintenance Interface (eth0)    root                hpnonstop          “Change the Maintenance Interface (Eth0) Password”
NonStop Server Blade MP (iLO)        Admin               hpnonstop          “Change the NonStop Server Blade MP (iLO) Password”
Remote Desktop                       Admin               (None)             “Change the Remote Desktop Password”
Change the Onboard Administrator (OA) Password  
To change the OA password:  
1. Log in to the OA. (You can use the Launch OA URL action on the processor blade from the
OSM Service Connection.)
2. Click the + (plus sign) in front of the Enclosure Information on the left.
3. Click the + (plus sign) in front of Users/Authentication.
4. Click Local Users and all users are displayed on the right side.
5. Select Administrator and click Edit.
6. Enter the new password, then confirm it again. Click Update User.
7. Keep track of your OA password.  
8. Change the password for each OA.  
Change the CLIM iLO Password  
To change the CLIM iLO password:  
1. In OSM, right click on the CLIM and select Actions.  
2. In the next screen, in the Available Actions drop-down window, select Invoke iLO
and click Perform Action.
3. Select the Administration tab.
4. Select User Administration.  
5. Select Admin local user.  
6. Select View/Modify.  
7. Change the password.  
8. Click Save User Information.  
9. Keep track of your CLIM iLO password.  
10. Change the iLO password for each CLIM.  
Change the Maintenance Interface (Eth0) Password  
To change the maintenance interface (eth0) password:  
1. From the NonStop host system, enter the climcmd command with the passwd option. You
can specify the CLIM by its name, IP address, or host name:
>climcmd clim-name passwd
You are prompted for the new password twice. For example:
$SYSTEM STARTUP 3> climcmd c1002531 passwd  
comForte SSH client version T9999H06_11Feb2008_comForte_SSH_0078  
Enter new UNIX password: hpnonstop  
Retype new UNIX password: hpnonstop  
passwd: password updated successfully  
Termination Info: 0  
2. Change the maintenance interface (eth0) password for each CLIM.  
The user name and password for the eth0:0 maintenance provider are the standard NonStop  
host system ones, for example, super.super, and so on. Other than standard procedures for setting  
up NonStop host system user names and passwords, nothing further is required for the eth0:0  
maintenance provider passwords.  
Change the NonStop Server Blade MP (iLO) Password
To change the NonStop Server Blade MP (iLO) password:
1. Log in to the iLO. (You can use the Launch iLO URL action on the processor blade from the
OSM Service Connection.)
2. Select the Administration tab.
3. Click Local Accounts from the left side window.
4. Select the user on the right-hand side and click the Add/Edit button below.
5. On the new page, enter the new password in the password and confirmation fields, and click
Submit.
6. Keep track of your NonStop Server Blade MP (iLO) password.
7. Change the password for each NonStop Server Blade MP.
Change the Remote Desktop Password  
You must change the Remote Desktop Administrator's password to enable connections to the  
NonStop system console. To change the password for the Administrator's account (which you  
have logged onto):  
1. Press the Ctrl+Alt+Del keys and the Windows Security dialogue appears.  
2. Click Change Password.  
3. In the Change Password window:  
a. Enter the old password.  
b. Enter the new password.  
c. Click OK.  
Default Naming Conventions  
The NonStop BladeSystem implements default naming conventions in the same manner as  
Integrity NonStop NS-series systems.  
With a few exceptions, default naming conventions are not necessary for the modular resources  
that make up a NonStop BladeSystem. In most cases, users can name their resources at will and  
use the appropriate management applications and tools to find the location of the resource.  
However, default naming conventions for certain resources simplify creation of the initial  
configuration files and automatic generation of the names of the modular resources.  
Preconfigured default resource names are:  

Type of Object | Naming Convention | Example | Description  
Cluster I/O Module (CLIM) | Cgroup module slot port fiber | C1002532 | CLIM that has an X1 attachment point of fiber on the ServerNet switch port located in group 100, module 2, slot 5, port 3, fiber 2  
SAS disk volume | $SASnumber | $SAS20 | Twentieth SAS disk volume in the system  
ESS disk volume | $ESSnumber | $ESS20 | Twentieth ESS disk drive in the system  
Fibre Channel disk drive | $FCnumber | $FC10 | Tenth Fibre Channel disk drive in the system  
Tape drive | $TAPEnumber | $TAPE01 | First tape drive in the system  
Maintenance CIPSAM process | $ZTCPnumber | $ZTCP0 | First maintenance CIPSAM process for the system  
Maintenance provider | ZTCPnumber | ZTCP0 | First maintenance provider for the system, associated with the CIPSAM process $ZTCP0  
Maintenance CIPSAM process | $ZTCPnumber | $ZTCP1 | Second maintenance CIPSAM process for the system  
Maintenance provider | ZTCPnumber | ZTCP1 | Second maintenance provider for the system, associated with the CIPSAM process $ZTCP1  
IPDATA CIPSAM process | $ZTCnumber | $ZTC0 | First IPDATA CIPSAM process for the system  
IPDATA provider | ZTCnumber | ZTC0 | First IPDATA provider for the system  
Maintenance Telserv process | $ZTNPnumber | $ZTNP1 | Second maintenance Telserv process for the system, associated with the CIPSAM $ZTCP1 process  
Non-maintenance Telserv process | $ZTNnumber | $ZTN0 | First non-maintenance Telserv process for the system, associated with the CIPSAM $ZTC0 process  
Listener process | $ZPRPnumber | $ZPRP1 | Second maintenance Listener process for the system, associated with the CIPSAM $ZTC1 process  
Non-maintenance Listener process | $LSNnumber | $LSN0 | First non-maintenance Listener process for the system, associated with the CIPSAM $ZTC0 process  
TFTP process | Automatically created by WANMGR | None | None  
WANBOOT process | Automatically created by WANMGR | None | None  
SWAN adapter | Snumber | S19 | Nineteenth SWAN adapter in the system  
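As an aside, the CLIM convention above can be expressed as a one-line rule. The following Python sketch is purely illustrative (the function name is hypothetical and not part of any HP tool); it simply concatenates the location fields:  

def default_clim_name(group, module, slot, port, fiber):  
    # Default CLIM name per the convention above: 'C' followed by the  
    # group, module, slot, port, and fiber of the X1 attachment point.  
    return "C{}{}{}{}{}".format(group, module, slot, port, fiber)  

# Example from the table: group 100, module 2, slot 5, port 3, fiber 2  
print(default_clim_name(100, 2, 5, 3, 2))  # -> C1002532  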
Possible Values of Disk and Tape LUNs  
The possible values of disk and tape LUN numbers depend on the type of the resource:  
- For a SAS disk, the LUN number is calculated as base LUN + offset. base LUN is the base LUN number for the SAS enclosure; its value can be 100, 200, 300, 400, 500, 600, 700, 800, or 900, and base LUNs should be numbered sequentially for each of the SAS enclosures attached to the same CLIM. offset is the bay (slot) number of the disk in the SAS enclosure.  
- For an ESS disk, the LUN number is calculated as base LUN + offset. base LUN is the base LUN number for the ESS port; its value can be 1000, 1500, 2000, 2500, 3000, 3500, 4000, or 4500, and base LUNs should be numbered sequentially for each of the ESS ports attached to the same CLIM. offset is the LUN number of the ESS LUN.  
- For a physical Fibre Channel tape, the value of the LUN number can be 1, 2, 3, 4, 5, 6, 7, 8, or 9, and LUNs should be numbered sequentially for each of the physical tapes attached to the same CLIM.  
- For a VTS tape, the LUN number is calculated as base LUN + offset. base LUN is the base LUN number for the VTS port; its value can be 5000, 5010, 5020, 5030, 5040, 5050, 5060, 5070, 5080, or 5090, and base LUNs should be numbered sequentially for each of the VTS ports attached to the same CLIM. offset is the LUN number of the VTS LUN.  
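To make the base LUN + offset rule concrete, here is a small illustrative Python sketch (the function names are hypothetical and not part of any HP utility):  

def sas_disk_lun(base_lun, bay):  
    # base_lun: 100, 200, ..., 900, assigned sequentially per SAS enclosure on the CLIM  
    # bay: bay (slot) number of the disk in the SAS enclosure  
    return base_lun + bay  

def ess_disk_lun(base_lun, ess_lun):  
    # base_lun: 1000, 1500, ..., 4500, assigned sequentially per ESS port on the CLIM  
    # ess_lun: LUN number of the ESS LUN  
    return base_lun + ess_lun  

def vts_tape_lun(base_lun, vts_lun):  
    # base_lun: 5000, 5010, ..., 5090, assigned sequentially per VTS port on the CLIM  
    # vts_lun: LUN number of the VTS LUN  
    return base_lun + vts_lun  

# Example: disk in bay 4 of the second SAS enclosure (base LUN 200) -> LUN 204  
print(sas_disk_lun(200, 4))  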
5 Hardware Configuration in Modular Cabinets  
This chapter shows locations of hardware components within the 42U modular cabinet for a  
NonStop BladeSystem. A number of physical configurations are possible because of the flexibility  
inherent to the NonStop Multicore Architecture and ServerNet network.  
NOTE: Hardware configuration drawings in this chapter represent the physical arrangement  
of the modular enclosures but do not show PDUs. For information about PDUs, see “Power  
Maximum Number of Modular Components  
This table shows the maximum number of the modular components installed in a BladeSystem.  
These values might not reflect the system you are planning and are provided only as an example,  
not as exact values.  

Component | 2 Processors | 4 Processors | 6 Processors | 8 Processors  
c7000 enclosure | 1 | 1 | 1 | 1  
ServerNet switch in c7000 enclosure | 2 | 2 | 2 | 2  
IOAM enclosure (1) | 4 | 4 | 4 | 4  
CLIMs (2) | 24 | 24 | 24 | 24  

(1) The IOAM maximum requires ServerNet High I/O Switches.  
(2) The CLIM maximum requires ServerNet High I/O Switches.  
Enclosure Locations in Cabinets  
This table provides details about the location of NonStop BladeSystem enclosures and components  
within a cabinet. The enclosure location refers to the U location on the rack where the lower edge  
of the enclosure resides, such as the bottom of a system console at 20U.  
Enclosure or Component | Height (U) | Required Cabinet (Rack) Location | Notes  
PDUs | N/A | AC power cord for the PDU exits out the top rear corner (top-feed AC) or out the bottom rear corner (bottom-feed AC) | Applies with and without the optional UPS.  
HP R12000/3 UPS | 6U | Bottom U of rack | The UPS and any ERMs must be installed in the bottom U of the rack to avoid tipping and stability issues.  
Extended runtime module (ERM) | 3U | Immediately above the UPS (and above the first ERM if two ERMs are installed) | Up to three ERMs can be installed.  
Cabinet stabilizer | N/A | Bottom front exterior of cabinet | Required when you have fewer than four cabinets bayed together. The cabinet stabilizer is not required when the cabinet is bolted to its adjacent cabinet.  
c7000 enclosure | 10U | Must be installed at U9 when there is no UPS installed; must be installed at U11 when there is a UPS and ERM | There is a limit of one c7000 enclosure per cabinet.  
IP CLIM | 2U | Any available 2U space. Upper U locations are recommended. | IP CLIMs should be adjacent to one another in a group of four, so the CLIMs can share one quad optic port on the c7000 ServerNet switch.  
Storage CLIM | 2U | Any available 2U space. Upper U locations are recommended. | Storage CLIMs and disk enclosures should be adjacent to one another. Storage CLIMs should be adjacent to one another in a group of two, so the CLIMs can share one quad optic port on the c7000 ServerNet switch.  
SAS disk enclosure | 2U | Any available 2U space. Middle or upper U locations are recommended. | Disk enclosures and Storage CLIMs should be adjacent to one another.  
IOAM enclosure | 11U | Any available 11U space. Middle or upper U locations are recommended. | IOAMs and FCDMs should be adjacent to one another.  
Fibre Channel disk module (FCDM) | 3U | Any available 3U space. Middle or upper U locations are recommended. | IOAMs and FCDMs should be adjacent to one another. Restricted service clearances might exist with a Fibre Channel disk module installed adjacent to the maintenance switch.  
System console | 2U | U20 is recommended. | Operations and service personnel can use the console best at the middle U locations.  
Maintenance switch | 1U | Any available 1U space. Top of cabinet is recommended. |  
Typical Configuration  
Figure 5-1 (page 79) shows the U locations of some of the hardware components that can be  
installed in the 42U modular cabinet.  
Figure 5-1 42U Configuration  
These options can be installed in locations marked Configurable Space in the configuration  
drawings:  
Maintenance switch: 1U required, preferably at the top of the cabinet when there is no UPS  
or the bottom of the cabinet when a UPS is present.  
Console: 2U required, with recommended installation at cabinet offset U20 when there is  
no UPS or U21 when a UPS is present.  
Fibre Channel disk module: 3U required  
A second cabinet is required when:  
A second c7000 enclosure is needed for additional NonStop Server Blades or other  
components.  
Additional SAS disk enclosures and FCDMs are needed for storage, but space doesn't exist  
in the cabinet.  
Space for optional components exceeds the capacity of the cabinet.  
6 Maintenance and Support Connectivity  
Local monitoring and maintenance of the NonStop BladeSystem occurs over the dedicated service  
LAN. The dedicated service LAN provides connectivity between the system console and the  
maintenance infrastructure in the system hardware. Remote support is provided by OSM, which  
runs on the system console and communicates over the HP Instant Support Enterprise Edition  
infrastructure or an alternative remote access solution.  
Only components specified by HP can be connected to the dedicated LAN. No other access to  
the LAN is permitted.  
The dedicated service LAN uses a ProCurve 2524 Ethernet switch for connectivity between the  
c7000 enclosure, CLIMs, IOAM enclosures, and the system console.  
HP ISEE call-out and call-in access is provided by the hpVPN Cisco 831 router, which connects  
to the customer's Internet access. Alternatively, call-out and call-in access is provided by a modem.  
NOTE: Your account representative must place a separate order of the ISEE VPN router with  
the assistance of the ISEE team.  
An important part of the system maintenance architecture, the system console is a personal  
computer (PC) purchased from HP to run maintenance and diagnostic software for NonStop  
BladeSystems. Through the system console, you can:  
Monitor system health and perform maintenance operations using the HP NonStop Open  
System Management (OSM) interface  
View manuals and service procedures  
Run HP Tandem Advanced Command Language (TACL) sessions using terminal-emulation  
software  
Install and manage system software using the Distributed Systems Management/Software  
Configuration Manager (DSM/SCM)  
Make remote requests to and receive responses from a system using remote operation  
software  
Dedicated Service LAN  
A NonStop BladeSystem requires a dedicated LAN for system maintenance through OSM. Only  
components specified by HP can be connected to a dedicated LAN. No other access to the LAN  
is permitted.  
This subsection includes:  
Basic LAN Configuration  
Fault-Tolerant LAN Configuration  
IP Addresses  
Ethernet Cables  
SWAN Concentrator Restrictions  
Dedicated Service LAN Links Using G4SAs  
Dedicated Service LAN Links Using IP CLIMs  
Initial Configuration for a Dedicated Service LAN  
Basic LAN Configuration  
A basic dedicated service LAN that does not provide a fault-tolerant configuration requires  
connection of these components to the ProCurve 2524 maintenance switch installed in the modular  
cabinet, as shown in Figure 6-1 (page 82):  
One connection for each system console running OSM  
One connection to each of the two Onboard Administrators (OAs) in each c7000 enclosure  
One connection to each of the two Interconnect Ethernet switches in each c7000 enclosure  
One connection to the maintenance interface (eth0) for each IP and Storage CLIM.  
One connection to the iLO interface for each IP CLIM and Storage CLIM  
One connection to each of the ServerNet switch boards in each IOAM enclosure, and  
optionally, two connections to two G4SAs in the system (if the NonStop maintenance LAN  
is implemented using G4SAs)  
UPS (optional) for power-fail monitoring  
Figure 6-1 Example of a Basic LAN Configuration With One Maintenance Switch  
Fault-Tolerant LAN Configuration  
HP recommends that you use a fault-tolerant LAN configuration. A fault-tolerant configuration  
includes these connections to two maintenance switches, as shown in Figure 6-2 (page 84):  
A system console to each maintenance switch  
One connection from one Onboard Administrator (OA) in the c7000 enclosure to one  
maintenance switch, and another connection from the other Onboard Administrator to the  
second maintenance switch  
One connection from one Interconnect Ethernet switch in the c7000 enclosure to one  
maintenance switch, and another connection from the other Interconnect Ethernet switch  
to the second maintenance switch  
For every CLIM pair, connect the iLO and eth0 ports of the primary CLIM to one maintenance  
switch, and the iLO and eth0 ports of the backup CLIM to the second maintenance switch  
For IP CLIMs, the primary and backup CLIMs are defined, based on the CLIM-to-CLIM  
failover configuration  
For Storage CLIMs, the primary and backup CLIMs are defined, based on the disk path  
configuration  
A Storage CLIM to one maintenance switch and another Storage CLIM to the other  
maintenance switch  
One of the two IOAM enclosure ServerNet switch boards to each maintenance switch  
(optional)  
If CLIMs are used to configure the maintenance LAN, connect the CLIM that configures  
$ZTCP0 to one maintenance switch, and connect the other CLIM that configures $ZTCP1  
to the second maintenance switch  
If G4SAs are used to configure the maintenance LAN, connect the G4SA that configures  
$ZTCP0 to one maintenance switch, and connect the other G4SA that configures $ZTCP1  
to the second maintenance switch  
Figure 6-2 Example of a Fault-Tolerant LAN Configuration With Two Maintenance Switches  
IP Addresses  
NonStop BladeSystems require Internet protocol (IP) addresses for these components that are  
connected to the dedicated service LAN:  
c7000 enclosure ServerNet switches  
IOAM enclosure ServerNet switch boards  
Maintenance switches  
System consoles  
OSM Service Connection  
UPS (optional)  
NOTE: Factory-default IP addresses for G4SAs are in the LAN Configuration and Management  
Manual. IP addresses for SWAN concentrators are in the WAN Subsystem Configuration and  
Management Manual.  
These components have default IP addresses that are preconfigured at the factory. You can  
change these preconfigured IP addresses to addresses appropriate for your LAN environment:  
Component | Location | Default IP Address  
Primary system console (rack-mounted or stand-alone) | N/A | 192.168.36.1  
Backup system console (rack-mounted only) | N/A | 192.168.36.2  
Maintenance switch (ProCurve 2524), first switch | N/A | 192.168.36.21  
Maintenance switch (ProCurve 2524), second switch | N/A | 192.168.36.22  
Onboard Administrators in c7000 enclosure | N/A | Assigned by DHCP server on the NonStop system console  
CLIM iLOs | N/A | Assigned by DHCP server on the NonStop system console  
Server Blade iLOs | N/A | Assigned through Enclosure Bay IP Addressing (EBIPA)  
ServerNet switches in c7000 enclosure (OSM Low-Level Link) | N/A | Assigned through Enclosure Bay IP Addressing (EBIPA)  
Interconnect Ethernet switches | N/A | Assigned through Enclosure Bay IP Addressing (EBIPA)  
CLIM maintenance interfaces | CLIMs at 100.2.5.3.1 through 100.2.5.3.4 | 192.168.38.31 through 192.168.38.34  
CLIM maintenance interfaces | CLIMs at 100.2.5.4.1 through 100.2.5.4.4 | 192.168.38.41 through 192.168.38.44  
CLIM maintenance interfaces | CLIMs at 100.2.5.5.1 through 100.2.5.5.4 | 192.168.38.51 through 192.168.38.54  
CLIM maintenance interfaces | CLIMs at 100.2.5.6.1 through 100.2.5.6.4 | 192.168.38.61 through 192.168.38.64  
CLIM maintenance interfaces | CLIMs at 100.2.5.7.1 through 100.2.5.7.4 | 192.168.38.71 through 192.168.38.74  
CLIM maintenance interfaces | CLIMs at 100.2.5.8.1 through 100.2.5.8.4 | 192.168.38.81 through 192.168.38.84  
CLIM maintenance interfaces | CLIMs at 101.2.5.3.1 through 101.2.5.3.4 | 192.168.38.31 through 192.168.38.34  
CLIM maintenance interfaces | CLIMs at 101.2.5.4.1 through 101.2.5.4.4 | 192.168.38.41 through 192.168.38.44  
CLIM maintenance interfaces | CLIMs at 101.2.5.5.1 through 101.2.5.5.4 | 192.168.38.51 through 192.168.38.54  
CLIM maintenance interfaces | CLIMs at 101.2.5.6.1 through 101.2.5.6.4 | 192.168.38.61 through 192.168.38.64  
CLIM maintenance interfaces | CLIMs at 101.2.5.7.1 through 101.2.5.7.4 | 192.168.38.71 through 192.168.38.74  
CLIM maintenance interfaces | CLIMs at 101.2.5.8.1 through 101.2.5.8.4 | 192.168.38.81 through 192.168.38.84  
IOAM enclosure (ServerNet switch boards) | 110.2.14 | 192.168.36.222  
IOAM enclosure (ServerNet switch boards) | 110.3.14 | 192.168.36.223  
IOAM enclosure (ServerNet switch boards) | 111.2.14 | 192.168.36.224  
IOAM enclosure (ServerNet switch boards) | 111.3.14 | 192.168.36.225  
IOAM enclosure (ServerNet switch boards) | 112.2.14 | 192.168.36.226  
IOAM enclosure (ServerNet switch boards) | 112.3.14 | 192.168.36.227  
IOAM enclosure (ServerNet switch boards) | 113.2.14 | 192.168.36.228  
IOAM enclosure (ServerNet switch boards) | 113.3.14 | 192.168.36.229  
IOAM enclosure (ServerNet switch boards) | 114.2.14 | 192.168.36.230  
IOAM enclosure (ServerNet switch boards) | 114.3.14 | 192.168.36.231  
IOAM enclosure (ServerNet switch boards) | 115.2.14 | 192.168.36.232  
IOAM enclosure (ServerNet switch boards) | 115.3.14 | 192.168.36.233  
UPS (rack-mounted only) | Rack 01 | 192.168.36.31  
UPS (rack-mounted only) | Rack 02 | 192.168.36.32  
UPS (rack-mounted only) | Rack 03 | 192.168.36.33  
UPS (rack-mounted only) | Rack 04 | 192.168.36.34  
UPS (rack-mounted only) | Rack 05 | 192.168.36.35  
UPS (rack-mounted only) | Rack 06 | 192.168.36.36  
UPS (rack-mounted only) | Rack 07 | 192.168.36.37  
UPS (rack-mounted only) | Rack 08 | 192.168.36.38  
Onboard Administrator EBIPA settings | First enclosure device bay subnet mask | 255.255.0.0  
Onboard Administrator EBIPA settings | First enclosure device bay IP addresses | 192.168.36.40 through 192.168.36.55  
Onboard Administrator EBIPA settings | First enclosure interconnect bay subnet mask | 255.255.0.0  
Onboard Administrator EBIPA settings | First enclosure interconnect bay IP addresses | 192.168.36.60 through 192.168.36.67  
Onboard Administrator EBIPA settings | Second enclosure device bay subnet mask | 255.255.0.0  
Onboard Administrator EBIPA settings | Second enclosure device bay IP addresses | 192.168.36.70 through 192.168.36.85  
Onboard Administrator EBIPA settings | Second enclosure interconnect bay subnet mask | 255.255.0.0  
Onboard Administrator EBIPA settings | Second enclosure interconnect bay IP addresses | 192.168.36.90 through 192.168.36.97  
NonStop system console DHCP server settings | Primary system console starting IP address | 192.168.31.1  
NonStop system console DHCP server settings | Primary system console ending IP address | 192.168.31.254  
NonStop system console DHCP server settings | Primary system console subnet mask | 255.255.0.0  
NonStop system console DHCP server settings | Backup system console starting IP address | 192.168.32.1  
NonStop system console DHCP server settings | Backup system console ending IP address | 192.168.32.254  
NonStop system console DHCP server settings | Backup system console subnet mask | 255.255.0.0  
TCP/IP processes for OSM Service Connection | $ZTCP0 | 192.168.36.10 (255.255.0.0 subnet mask)  
TCP/IP processes for OSM Service Connection | $ZTCP1 | 192.168.36.11 (255.255.0.0 subnet mask)  
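The factory-default CLIM maintenance interface addresses above follow a simple pattern: the last octet of 192.168.38.nn is the ServerNet switch port number followed by the fiber number, and the table lists the same defaults for group 100 and group 101 CLIMs. The short Python sketch below is illustrative only (not an HP tool) and reproduces that pattern:  

def default_clim_maintenance_ip(port, fiber):  
    # Default eth0 maintenance address for a CLIM attached at the given  
    # ServerNet switch port (3-8) and fiber (1-4), per the table above.  
    return "192.168.38.{}{}".format(port, fiber)  

# CLIM at 100.2.5.8.1 (port 8, fiber 1) -> 192.168.38.81  
print(default_clim_maintenance_ip(8, 1))  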
Ethernet Cables  
Ethernet connections for a dedicated service LAN require Category 5 unshielded twisted-pair  
(UTP) cables. For supported cables, see Appendix A (page 93).  
SWAN Concentrator Restrictions  
Isolate any ServerNet wide area networks (SWANs) on the system. The system must be  
equipped with at least two LANs: one LAN for SWAN concentrators and one for the  
dedicated service LAN.  
Most SWAN concentrators are configured redundantly using two or more subnets. Those  
subnets also must be isolated from the dedicated service LAN.  
Do not connect SWANs on a subnet containing a DHCP server.  
Dedicated Service LAN Links Using G4SAs  
You can implement system-up service LAN connectivity using G4SAs or IP CLIMs. The values  
in this table show the identification for G4SAs in slot 5 of both modules of an IOAM enclosure  
and connected to the maintenance switch:  
GMS for G4SA Location in IOAME | G4SA PIF | G4SA LIF | TCP/IP Stack | IP Configuration  
110.2.5 | G11025.0.A | L1102R | $ZTCP0 | IP: 192.168.36.10, Subnet: %hFFFF0000, Hostname: osmlanx  
110.3.5 | G11035.0.A | L1103R | $ZTCP1 | IP: 192.168.36.11, Subnet: %hFFFF0000, Hostname: osmlany  
NOTE: For a fault-tolerant dedicated service LAN, two G4SAs are required, with each G4SA  
connected to a separate maintenance switch. These G4SAs can reside in modules 2 and 3 of the  
same IOAM enclosure or in module 2 of one IOAM enclosure and module 3 of a second IOAM  
enclosure. When the G4SA provides connection to the dedicated service LAN, use the slower  
10/100 Mbps PIF A rather than one of the high-speed 1000 Mbps Ethernet ports of PIF C or D.  
Dedicated Service LAN Links Using IP CLIMs  
You can implement system-up service LAN connectivity using IP CLIMs if the system has at  
least two IP CLIMs. The values in this table show the identification for the CLIMs in a NonStop  
BladeSystem that are connected to the maintenance switch. In this table, a CLIM named C1002581  
is connected to the first fiber and eighth port of the ServerNet switch in group 100, module 2,  
interconnect bay 5 of a c7000 enclosure:  
CLIM Location | TCP/IP Stack | IP Configuration  
100.2.5.8.1 | $ZTCP0 | IP: 192.168.36.10, Subnet: %hFFFF0000, Hostname: osmlanx  
100.2.5.8.2 | $ZTCP1 | IP: 192.168.36.11, Subnet: %hFFFF0000, Hostname: osmlany  
NOTE: For a fault-tolerant dedicated service LAN, two IP CLIMs are required, with each IP  
CLIM connected to a separate maintenance switch.  
Initial Configuration for a Dedicated Service LAN  
New systems are shipped with an initial set of IP addresses configured. For a listing of these  
initial IP addresses, see “IP Addresses” (page 84).  
Factory-default IP addresses for the G4SAs are in the LAN Configuration and Management Manual.  
IP addresses for SWAN concentrators are in the WAN Subsystem Configuration and Management  
Manual.  
HP recommends that you change these preconfigured IP addresses to addresses appropriate for  
your LAN environment. You must change the preconfigured IP addresses on:  
A backup system console if you want to connect it to a dedicated service LAN that already  
includes a primary system console or other system console  
Any system console if you want to connect it to a dedicated service LAN that already includes  
a primary system console  
Keep track of all the IP addresses in your system so that no IP address is assigned twice.  
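One simple way to follow that advice is to keep the planned addresses in a small inventory and check it for duplicates before applying changes. The sketch below is a generic, hypothetical helper (not an HP utility), seeded with a few of the factory defaults listed earlier:  

from collections import Counter  

# Hypothetical planning inventory: component name -> planned IP address  
planned = {  
    "primary-system-console": "192.168.36.1",  
    "backup-system-console": "192.168.36.2",  
    "maintenance-switch-1": "192.168.36.21",  
    "maintenance-switch-2": "192.168.36.22",  
}  

duplicates = [ip for ip, count in Counter(planned.values()).items() if count > 1]  
if duplicates:  
    print("Duplicate IP addresses found:", ", ".join(duplicates))  
else:  
    print("No duplicate IP addresses in the plan.")  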
System Consoles  
New system consoles are preconfigured with the required HP and third-party software. When  
upgrading to the latest RVU, you can install software upgrades from the HP NonStop System  
Console Installer DVD image.  
Some system console hardware, including the PC system unit, monitor, and keyboard, can be  
mounted in the cabinet. Other PCs are installed outside the cabinet and require separate provisions  
or furniture to hold the PC hardware.  
System consoles communicate with NonStop BladeSystems over a dedicated service local area  
network (LAN) or a secure operations LAN. A dedicated service LAN is required for use of OSM  
Low-Level Link and Notification Director functionality, which includes configuring primary  
and backup dial-out points (referred to as the primary and backup system consoles, respectively).  
HP recommends that you also configure the backup dedicated service LAN with a backup system  
console.  
System Console Configurations  
Several system console configurations are possible:  
One System Console Managing One System (Setup Configuration)  
The one system console on the LAN must be configured as the primary system console. This  
configuration can be called the setup configuration and is used during initial setup and installation  
of the system console and the server.  
The setup configuration is an example of a secure, stand-alone network as shown in Figure 6-1  
(page 82). A LAN cable connects the primary system console to the maintenance switch, and  
additional LAN cables connect the switches and Ethernet ports. The maintenance switch or an  
optional second maintenance switch allows you to later add a backup system console and  
additional system consoles.  
NOTE: Because the system console and maintenance switch are single points of failure that  
could disrupt access to OSM, this configuration is not recommended for operations that require  
high availability or fault tolerance.  
When you use this configuration, you do not need to change the preconfigured IP addresses.  
Primary and Backup System Consoles Managing One System  
This configuration is recommended. It is similar to the setup configuration, but for fault-tolerant  
redundancy, it includes a second maintenance switch, backup system console, and second modem  
(if a modem-based remote solution is used). The maintenance switches provide a dedicated LAN  
in which all systems use the same subnet. Figure 6-2 (page 84) shows a fault-tolerant configuration  
without modems.  
NOTE: A subnet is a network division within the TCP/IP model. Within a given network, each  
subnet is treated as a separate network. Outside that network, the subnets appear as part of a  
single network. The terms subnet and subnetwork are used interchangeably.  
If a remote maintenance LAN connection is required, use the second network interface card  
(NIC) in the NonStop system console to connect to the operations LAN, and access the other  
devices in the maintenance LAN using Remote Desktop via the console.  
Because this configuration uses only one subnet, you must:  
Enable Spanning Tree Protocol (STP) in switches or routers that are part of the operations  
LAN.  
NOTE: Do not perform the next two bulleted items if your backup system console is shipped  
with a new NonStop BladeSystem. In this case, HP has already configured these items for  
you.  
Change the preconfigured DHCP configuration of the backup system console before you  
add it to the LAN.  
Change the preconfigured IP address of the backup system console before you add it to the  
LAN.  
CAUTION: Networks with more than one path between any two systems can cause loops  
that result in message duplication and broadcast storms that can bring down the network.  
If a second connection is used, refer to the documentation for the ProCurve 2524 maintenance  
switch and enable STP in the maintenance switches. STP ensures only one active path at any  
given moment between two systems on the network. In networks with two or more physical  
paths between two systems, STP ensures only one active path between them and blocks all  
other redundant paths.  
Multiple System Consoles Managing One System  
Two maintenance switches provide fault tolerance and extra ports for adding system consoles.  
You must change the preconfigured IP addresses of the second and subsequent system consoles  
before you can add them to the LAN. Only two system consoles should run the DHCP, DNS,  
BOOTP, FTP, and TFTP servers. These services should not be running on other consoles in the  
same maintenance LAN.  
Managing Multiple Systems Using One or Two System Consoles  
If you want to manage more than one system from a console (or from a fault-tolerant pair of  
consoles), you can daisy chain the maintenance switches together. This configuration requires  
an IP address scheme to support it. Contact your HP service provider to design this configuration.  
Cascading Ethernet Switch or Hub Configuration  
Additional Ethernet switches or hubs can be connected (cascaded) to the maintenance switches  
already installed. Primary and backup system consoles and the server must be on the same  
subnet.  
You must change the preconfigured IP addresses of the second and subsequent system consoles  
before you can add them to the LAN.  
A Cables  
Cable Types, Connectors, Lengths, and Product IDs  
Available cables and their lengths are:  
Cable Type | Connectors | Length (meters) | Length (feet) | Product ID  
MMF | LC-LC | 0.24 | 0.79 | N.A.  
MMF | MTP-LC | 2.5 | 8 | M8941-02  
MMF | MTP-LC | 10 | 33 | M8941-10  
MMF | MTP-LC | 15 | 49 | M8941-15  
MMF | MTP-LC | 30 | 98 | M8941-30  
MMF | MTP-LC | 50 | 164 | M8941-50  
MMF | MTP-MTP | 1 | 3 | M8925-01  
MMF | MTP-MTP | 5 | 16 | M8925-05  
MMF | MTP-MTP | 10 | 33 | M8925-10  
MMF | MTP-MTP | 30 | 98 | M8925-30  
MMF | MTP-MTP | 50 | 164 | M8925-50  
MMF | MTP-MTP | 100 | 328 | M8925-100  
SAS to mini SAS cables | SFF-8470 to SFF-8088 | 1 | 3 | M8905-01  
SAS to mini SAS cables | SFF-8470 to SFF-8088 | 2 | 7 | M8905-02  
SAS to mini SAS cables | SFF-8470 to SFF-8088 | 4 | 13 | M8905-04  
SAS to mini SAS cables | SFF-8470 to SFF-8088 | 6 | 20 | M8905-06  
SAS to SAS cables | SFF-8088 to SFF-8088 | 2 | 7 | M8906-02  
SAS to SAS cables | SFF-8088 to SFF-8088 | 4 | 13 | M8906-04  
SAS to SAS cables | SFF-8088 to SFF-8088 | 6 | 20 | M8906-06  
CAT-5 Ethernet | RJ-45 | 1.5 | 5 | M8926-05  
CAT-5 Ethernet | RJ-45 | 3 | 10 | M8926-10  
CAT-5 Ethernet | RJ-45 | 4.6 | 15 | M8926-15  
CAT-5 Ethernet | RJ-45 | 7.7 | 25 | M8926-25  
NOTE: ServerNet cluster connections on NonStop BladeSystems follow the ServerNet cluster  
and cable length rules and restrictions. For more information, see these manuals:  
ServerNet Cluster Supplement for NonStop BladeSystems  
For 6770 switches and star topologies: ServerNet Cluster Manual  
For 6780 switches and layered topology: ServerNet Cluster 6780 Planning and Installation Guide  
Cable Length Restrictions  
Maximum allowable lengths of cables connecting the modular system components are:  
Connection | Fiber Type | Connectors | Maximum Length | Product ID  
From c7000 enclosure to c7000 enclosure (interconnection) | MTP | MTP-MTP | 100 m | M8925nn  
From c7000 ServerNet switch to c7000 ServerNet switch (cross-link connection) | MMF | LC-LC | .24 m | N.A.  
From c7000 enclosure to CLIM | MTP | MTP-LC | 50 m | M8941nn  
From Storage CLIM SAS HBA port to SAS disk enclosure | MMF | SFF-8470 to SFF-8088 | 6 m | M8905nn  
From Storage CLIM SAS HBA port to SAS tape | MMF | SFF-8470 to SFF-8088 | 6 m | M8905nn  
From SAS disk enclosure to SAS disk enclosure | N.A. | SFF-8088 to SFF-8088 | 6 m | M8906nn  
From Storage CLIM FC port to ESS | MMF | LC-LC | 250 m | M8900nn  
From Storage CLIM FC port to FC tape | MMF | LC-LC | 250 m | M8900nn  
Although a considerable distance can exist between the modular enclosures in the system, HP  
recommends placing all cabinets adjacent to each other and bolting them together, with cable  
length between each of the enclosures as short as possible.  
B Operations and Management Using OSM Applications  
OSM client-based components are installed on new system console shipments and also delivered  
by an OSM installer on the HP NonStop System Console (NSC) Installer DVD image. The NSC  
DVD image also delivers all other client software required for managing and servicing NonStop  
servers. For installation instructions, see the NonStop System Console Installer Guide.  
OSM server-based components are incorporated in a single OSM server-based SPR, T0682 (OSM  
Service Connection Suite), that is installed on NonStop BladeSystems running the HP NonStop  
operating system.  
For information on how to install, configure and start OSM server-based processes and  
components, see the OSM Migration and Configuration Guide. The OSM components are:  
Product ID | Component | Task Performed  
T0632 | OSM Notification Director | Dial-in and dial-out services  
T0633 | OSM Low-Level Link | Provides down-system support. Provides support to configure IP CLIMs and Storage CLIMs before they are operational in a NonStop BladeSystem. Provides IP CLIM and Storage CLIM software updates.  
T0634 | OSM Console Tools | Provides Start menu shortcuts and default home pages for easy access to the OSM Service Connection and OSM Event Viewer (browser-based OSM applications that are not installed on the system console)  
T0634 | OSM Certificate Tool | Establishes certificate-based trust between the OSM server and the Onboard Administrators in a c7000 enclosure  
T0634 | OSM System Inventory Tool | Retrieves hardware inventory from multiple NonStop BladeSystems  
T0634 | Terminal Emulator File Converter | Converts existing OSM Service Connection-related OutsideView (.cps) session files to MR-WIN6530 (.653) session files  
System-Down OSM Low-Level Link  
In NonStop BladeSystems, the maintenance entity (ME) in the c7000 ServerNet switch or IOAM  
enclosures provides dedicated service LAN services via the OSM Low-Level Link for OS  
coldload, system management, and hardware configuration when hardware is powered up but  
the OS is not running.  
AC Power Monitoring  
NonStop BladeSystems require one of the following to support system operation through power  
transients or an orderly shutdown of I/O operations and processors during a power failure:  
The optional, HP-supported model R12000/3 UPS (with one to four ERMs for additional  
battery power)  
A user-supplied UPS installed in each modular cabinet  
A user-supplied site UPS  
If the HP R12000/3 UPS is installed, it is connected to the system's dedicated service LAN via  
the maintenance switch, where OSM monitors the power state as either AC on or AC off.  
For OSM to provide AC power fail support, an HP R12000/3 UPS must be installed, connected  
to the system's dedicated service LAN via the maintenance switch and configured as described  
in the NonStop BladeSystems Hardware Installation Manual.  
Then, you must perform these actions in the OSM Service Connection:  
Configure a Power Source as AC, located under Enclosure 100, to configure the power rail  
(either A or B) connected to AC power.  
Configure a Power Source as UPS, located under Enclosure 100, to configure the power  
rail (either A or B) connected to the UPS. While performing this action, you must enter the  
IP address of the UPS.  
(Optional/recommended) Verify Power Fail Configuration, located under the system object,  
to verify that power fail support has been properly configured and is in place for the NonStop  
BladeSystem.  
If a power outage occurs, OSM starts a ride-through timer and outputs an EMS notification that  
the system is running on the UPS batteries. The ride-through timer can be used to let the system  
continue operation for a short period in case the power outage was only a momentary transient.  
The ERMs installed in each cabinet can extend the battery-supported system runtime.  
The system user must use SCF to configure the system ride-through time to execute an orderly  
shutdown before the UPS batteries are depleted. The time available for battery support depends  
on the charge in the batteries and the power that the system draws.  
Additionally, if the site's air conditioning shuts down in a power failure, the system should be  
shut down before its internal air temperatures can rise to the point that initiates a thermal  
shutdown. A timely and orderly shutdown prevents an uncontrolled and asymmetric shutdown  
of the system resources from depleted UPS batteries or thermal shutdown.  
If a user-supplied rack-mounted UPS or a site UPS is used rather than the HP-supported model  
R12000/3 UPS, the system is not notified of the power outage. The user is responsible for detecting  
power transients and outages and developing the appropriate actions, which might include a  
ride-through time based on the capacity of the site UPS and the power demands made on that  
UPS.  
The R12000/3 UPS and ERM installed in modular cabinets do not support any devices that are  
external to the cabinets. External devices can include tape drives, external disk drives, LAN  
routers, and SWAN concentrators. Any external peripheral devices that do not have UPS support  
will fail immediately at the onset of a power failure. Plan for UPS support of any external  
peripheral devices that must remain operational as system resources. This support can come  
from a site UPS or individual units as necessary.  
This information relates to handling power failures:  
For ride-through time, see the SCF Reference Manual for the Kernel Subsystem.  
For the TACL SETTIME command, see the TACL Reference Manual.  
To set system time programmatically, see the Guardian Procedure Calls Reference Manual.  
AC Power-Fail States  
These states occur when a power failure occurs and an optional HP model R12000/3 UPS is  
installed in each cabinet within the system:  
System State | Description  
NSK_RUNNING | NonStop operating system is running normally.  
RIDE_THRU | OSM has detected a power failure and begins timing the outage. AC power returning terminates RIDE_THRU and puts the operating system back into the NSK_RUNNING state. At the end of the predetermined RIDE_THRU time, if AC has not returned, OSM executes a PFAIL_SHOUT and initiates an orderly shutdown of I/O operations and resources.  
HALTED | Normal halt condition. Halted processors do not participate in power-fail handling. A normal power-on also puts the processors into the HALTED state.  
POWER_OFF | Loss of optic power from the NonStop Server Blade occurs, or the UPS batteries supplying the server blade are completely depleted. When power returns, the system is essentially in a cold-boot condition.  
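Read together, the RIDE_THRU description amounts to a simple transition rule: stay in RIDE_THRU while AC is out and the timer runs, return to NSK_RUNNING if AC comes back, and start an orderly shutdown if the timer expires first. The Python fragment below is only an illustration of that rule (the function and the ORDERLY_SHUTDOWN label are placeholders, not OSM code):  

def next_power_state(state, ac_present, ride_thru_expired):  
    # Illustrative transition rule for the power-fail states described above.  
    if state == "NSK_RUNNING" and not ac_present:  
        return "RIDE_THRU"            # OSM detects the outage and starts timing it  
    if state == "RIDE_THRU":  
        if ac_present:  
            return "NSK_RUNNING"      # AC returned before the ride-through time ended  
        if ride_thru_expired:  
            return "ORDERLY_SHUTDOWN" # placeholder for the PFAIL_SHOUT and orderly shutdown  
    return state  

print(next_power_state("NSK_RUNNING", ac_present=False, ride_thru_expired=False))  # RIDE_THRU  
print(next_power_state("RIDE_THRU", ac_present=False, ride_thru_expired=True))     # ORDERLY_SHUTDOWN  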
C Default Startup Characteristics  
Each NonStop BladeSystem ships with these default startup characteristics:  
$SYSTEM disks residing in either SAS disk enclosures or FCDM enclosures:  
SAS Disk Enclosures  
Systems with only two to three Storage CLIMs and two SAS disk enclosures with the  
disks in these locations:  
CLIM X1 Location  
SAS Disk Enclosure  
Path  
Group  
100  
Module  
Slot  
5
Enclosure  
Bay  
1
Primary  
Backup  
Mirror  
2
2
2
2
1
3
3
2
100  
5
3
100  
5
3
Mirror-Backup 100  
5
1
Systems with at least four Storage CLIMs and two SAS disk enclosures with the disks  
in these locations:  
CLIM X1 Location  
SAS Disk Enclosure  
Path  
Group  
100  
Module  
Slot  
5
Enclosure  
Bay  
1
Primary  
Backup  
Mirror  
2
2
2
2
1
1
4
3
100  
5
1
100  
5
3
Mirror-Backup 100  
5
3
FCDM Enclosures  
Systems with one IOAM enclosure, two FCDMs, and two FCSAs with the disks in these  
locations:  
Path | IOAM Group | FCSA Module | FCSA Slot | FCSA SAC | FCDM Shelf | FCDM Bay  
Primary | 110 | 2 | 1 | 1 | 1 | 1  
Backup | 110 | 3 | 1 | 1 | 1 | 1  
Mirror | 110 | 3 | 1 | 2 | 1 | 1  
Mirror-Backup | 110 | 2 | 1 | 2 | 1 | 1  
Systems with two IOAM enclosures, two FCDMs, and two FCSAs with the disks in  
these locations:  
Path | IOAM Group | FCSA Module | FCSA Slot | FCSA SAC | FCDM Shelf | FCDM Bay  
Primary | 110 | 2 | 1 | 1 | 1 | 1  
Backup | 111 | 2 | 1 | 1 | 1 | 1  
Mirror | 111 | 2 | 1 | 2 | 1 | 1  
Mirror-Backup | 110 | 2 | 1 | 2 | 1 | 1  
Systems with one IOAM enclosure, two FCDMs, and four FCSAs with the disks in these  
locations:  
Path | IOAM Group | FCSA Module | FCSA Slot | FCSA SAC | FCDM Shelf | FCDM Bay  
Primary | 110 | 2 | 1 | 1 | 1 | 1  
Backup | 110 | 3 | 1 | 1 | 1 | 1  
Mirror | 110 | 3 | 2 | 2 | 1 | 1  
Mirror-Backup | 110 | 2 | 2 | 2 | 1 | 1  
Systems with two IOAM enclosures, two FCDMs, and four FCSAs with the disks in  
these locations:  
Path | IOAM Group | FCSA Module | FCSA Slot | FCSA SAC | FCDM Shelf | FCDM Bay  
Primary | 110 | 2 | 1 | 1 | 1 | 1  
Backup | 111 | 2 | 1 | 1 | 1 | 1  
Mirror | 111 | 3 | 1 | 2 | 1 | 1  
Mirror-Backup | 110 | 3 | 1 | 2 | 1 | 1  
Configured system load paths  
Enabled command interpreter input (CIIN) function  
If the automatic system load is not successful, additional paths for loading are available in the  
boot task. If a load attempt over one path fails, the system load task attempts another path and  
keeps trying until all possible paths have been used or the system load is successful. These 16  
paths are available for loading and are listed in the order of their use by the system load task:  
Load Path | Description | Source Disk | Destination Processor | ServerNet Fabric  
1 | Primary | $SYSTEM-P | 0 | X  
2 | Primary | $SYSTEM-P | 0 | Y  
3 | Backup | $SYSTEM-P | 0 | X  
4 | Backup | $SYSTEM-P | 0 | Y  
5 | Mirror | $SYSTEM-M | 0 | X  
6 | Mirror | $SYSTEM-M | 0 | Y  
7 | Mirror-Backup | $SYSTEM-M | 0 | X  
8 | Mirror-Backup | $SYSTEM-M | 0 | Y  
9 | Primary | $SYSTEM-P | 1 | X  
10 | Primary | $SYSTEM-P | 1 | Y  
11 | Backup | $SYSTEM-P | 1 | X  
12 | Backup | $SYSTEM-P | 1 | Y  
13 | Mirror | $SYSTEM-M | 1 | X  
14 | Mirror | $SYSTEM-M | 1 | Y  
15 | Mirror-Backup | $SYSTEM-M | 1 | X  
16 | Mirror-Backup | $SYSTEM-M | 1 | Y  
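The 16 paths follow a regular pattern: for each destination processor (0, then 1), the load task tries the Primary, Backup, Mirror, and Mirror-Backup paths, each over the X and then the Y ServerNet fabric, reading from $SYSTEM-P for the primary and backup paths and from $SYSTEM-M for the mirror paths. The following Python sketch simply regenerates the table above for reference (illustrative only, not part of any HP software):  

paths = []  
for processor in (0, 1):  
    for description in ("Primary", "Backup", "Mirror", "Mirror-Backup"):  
        disk = "$SYSTEM-P" if description in ("Primary", "Backup") else "$SYSTEM-M"  
        for fabric in ("X", "Y"):  
            paths.append((len(paths) + 1, description, disk, processor, fabric))  

for load_path, description, disk, processor, fabric in paths:  
    print(load_path, description, disk, processor, fabric)  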
The command interpreter input file (CIIN) is automatically invoked after the first processor is  
loaded. The CIIN file shipped with new systems contains the TACL RELOAD * command, which  
loads the remaining processors.  
For default configurations of the Fibre Channel ports, Fibre Channel disk modules, and load  
disks, see page 63. For default configurations of the HBA SAS ports, SAS disk enclosures, and  
load disks, see page 58.  
Index  
Cluster switch  
Symbols  
cable length restrictions, 54  
Configuration considerations  
Fibre Channel devices, 62  
maximum number of enclosures, 77  
Configuration examples, 77  
Configuration restrictions, Storage CLIM, 58  
Configuration, factory-installed hardware documentation,  
$SYSTEM disk locations, 99  
A
AC current calculations, 51  
AC Input Module, 37  
AC power  
208 V AC 3-phase delta 24A RMS, 44  
400 V AC 3-phase delta 16A RMS, 44  
enclosure input specifications, 45  
input, 44  
Connections  
basic LAN configuration, 81  
c7000s, between, 55  
power-fail monitoring, 95  
power-fail states, 97  
AC power feed, 42, 44  
bottom of cabinet, 43  
top of cabinet, 43  
CLIM, 56  
cluster switch, 54  
cross-link ServerNet fabric, 55  
fault-tolerant LAN configuration, 83  
FCDMs, 56  
Administrator  
FCSA to FCDM, 63  
IOAM enclosure, 56  
passwords, changing, 71  
Air conditioning, 33  
Air filters, 34  
IP CLIM, Ethernet, 70  
LAN using G4SAs, 88  
LAN using IP CLIMs, 89  
SAS disk enclosure, 57  
ServerNet switches in c7000 enclosure, 55  
supported, 54  
B
Blanking panels, 51  
Branch circuit, 44  
tape (Fibre), 57  
C
tape (SAS), 57  
c7000 enclosure  
Cooling, 33  
connections between c7000s, 55  
features, 18  
assessment, 33  
LEDs, 18  
D
location in cabinet, 78  
overview, 17  
Dedicated service LAN, 81  
Default disk drive locations  
FCDM, 61  
Cabinet, 25  
dimensions, 47  
SAS disk enclosures, 58  
example of load calculations, 52  
modular 42U, 37  
Default naming conventions, 73  
Default startup characteristics, 99  
Dimensions, 47  
plan view of, 47  
Cable length restrictions, 53, 94  
Cable product IDs, 93  
Cables, supported, 93  
Calculation  
enclosures, 48  
modular cabinet, 47  
service clearances, 47  
Disk drives  
heat, 34, 50  
weight, 34, 49  
Calculation, heat, 34, 50  
Calculation, weight, 34, 49  
Clearances, service, 47  
CLIM  
configuration recommendations (FCDM), 62  
daisy-chain recommendations (FCDM), 63  
default disk drive locations, SAS disk enclosures, 58  
FCDM default disk drive locations, 61  
SAS disk enclosure, bay locations, 57  
SAS disk enclosure, IO modules, 57  
Documentation  
IP CLIM Ethernet connections, 70  
IP CLIM ports , 70  
packet, 30  
ServerNet adapter configuration, 30  
Dust and microscopic particles, 34  
password for iLO, 72  
password for Maintenance Interface (eth01), 72  
password for server blade iLO (MP), 72  
restrictions (Storage CLIM), 58  
storage configurations, 58  
CLIM, connections, 56  
E
Electrical disturbances, 32  
Electrical power loading, 46  
Electrostatic immunity  
tested, 51  
Group, 25  
Group-module-slot (GMS), 25  
Group-module-slot-bay (GMSB), 25  
Group-module-slot-port (GMSP), 25  
Group-module-slot-port-fiber (GMSPF), 25  
Emergency power off (EPO) switches  
HP R12000/3 UPS, 31  
NonStop BladeSystems, 31  
Empty slots (see Blanking panels)  
Enclosure  
H
combinations, 17  
Hardware configuration  
typical, 78  
dimensions, 48  
height in U, 47  
Heat calculation, 34, 50  
Heat dissipation,Btu/hour, enclosures, 50  
Height in U, enclosures, 47  
Hot spots, 33  
location, 77  
maximum number, 77  
power loading, 46  
weight, 49  
Humidity, 33  
Enterprise Storage System (ESS), 22  
LUN, 75  
I
Environmental monitoring unit, 20  
EPO switches, 31  
Input power, 44  
Inrush current, 32  
Installation  
ERM, 21  
Example system  
specifications, 37  
8-processor, 16  
Integrity NonStop NB50000c BladeSystem characteristics,  
Example, IOAM and disk drive enclosure, 63  
Extended runtime module (ERM) (see ERM)  
IOAM  
location in cabinet, 78  
IOAM enclosure  
F
Factory-installed hardware, documentation, 30  
FC-AL configuration recommendations, 62  
FCDM  
GMS numbering, 27  
overview, 20  
IOAM enclosure, connections, 56  
IP addresses  
connecting. ports, 56  
illustration of bays, 61  
location in cabinet, 78  
components connected to LAN, 84  
IP CLIM  
FCSA, 60  
Ethernet configurations, 19  
location in cabinet, 78  
overview, 19  
configuration recommendations, 62  
Fiber, 25  
Fibre Channel arbitrated loop (FC-AL), 20, 61  
Fibre Channel device considerations, 62  
Fibre Channel devices, 60  
configuration restrictions, 62  
Fibre Channel disk module, 60  
Fibre Channel disk module (FCDM)  
overview, 20  
service LAN, 89  
L
LAN  
fault-tolerant configuration, 83  
non-fault-tolerant configuration, 81  
service, G4SA PIF, 89  
service, IP CLIM, 89  
Load operating system paths, 99  
Fibre tape  
connecting, ports, 57  
Flooring, 34  
Forms for ServerNet adapter configuration, 30  
Fuses, PDU, 43  
M
Maintenance switch  
BladeSystem connections to, 21  
CLIM connections to, 21  
IOAM connections to, 21  
location in cabinet, 78  
overview, 20  
G
G4SA  
network connections, 70  
service LAN PIF, 89  
GMS  
c7000 enclosure, 26  
Fibre Channel disk module, 29  
IOAM enclosure, 27  
Server blade, 27  
GMSF  
CLIM enclosure, 27  
Grounding, 32  
Management Tools for NonStop BladeSystems, 23  
Metallic particulate contamination, 34  
Mirror and primary disk drive location recommendations,  
Models, system, 15  
Modular cabinet  
physical specifications, 48  
weight, 49  
INTL with UPS, 40  
INTL without UPS, 41  
NA/JPN with UPS, 38  
NA/JPN without UPS, 39  
NonStop BladeSystem, 38  
Power feed, top or bottom, 31, 44  
Power input, 44  
N
Naming conventions, 73  
NB50000c BladeSystem  
characteristics, 15  
Noise emissions, 51  
NonStop BladeSystem  
characteristics, 15  
Power quality, 31  
components, 17  
management tools, 23  
overview, 15  
Power receptacles, PDU, 44  
Power-fail  
monitoring, 95  
states, 97  
phase load balancing, 45  
power feed setup, 38  
NonStop Multicore Architecture (NSMA)  
overview, 16  
NonStop Server Blade, 25  
overview, 19  
NSMA (see NonStop multiprocessor architecture)  
Primary and mirror disk drive location recommendations,  
R
R12000/3 UPS, 21  
Rack, 25  
Rack offset, 25, 26  
Raised flooring, 34  
Receiving and unpacking space, 34  
Receptacles, PDU, 44  
Remote Desktop  
O
Onboard Administrator  
password, 72  
Operating system load paths, 99  
Operational space, 35  
OSM, 90, 95  
password for, 72  
Restrictions  
description of, 24  
cable length, 53, 94  
Fibre Channel device configuration, 62  
OSM Certificate Tool, 95  
OSM Console Tools, 95  
OSM Low-Level Link, 95  
OSM Notification Director, 95  
OSM System Inventory Tool, 95  
OutsideView, converting files, 95  
S
Safety ground/protective earth, 32  
SAS disk enclosure  
bay locations, 57  
connecting, 57  
P
front and back view, 57  
location in cabinet, 78  
LUN, 75  
Particulates, metallic, 34  
Password  
changing for CLIM iLO , 72  
changing for CLIM Maintenance Interface (eth01), 72  
changing for Onboard Administrator (OA), 72  
changing for Remote Desktop, 72  
changing for server blade iLO (MP), 72  
Passwords, changing, 71  
Passwords, default, 71  
Paths, operating system load, 99  
PDU  
overview, 20  
SAS Tape  
connecting, 57  
Server blade, 25  
ServerNet cluster switch  
connections, 54  
ServerNet switch  
cross-connections, 55  
High I/O configuration, 55  
Standard I/O configuration, 55  
ServerNet switch, connection types, 54  
ServerNet switches in c7000  
Standard I/O and High I/O configurations, 26  
ServerNet switches in c7000 enclosure  
types, 55  
AC power feed, 42  
description, 42  
fuses, 43  
receptacles, 44  
PDU, International , 44  
PDU, North America and Japan, 44  
PDUs, 42  
Phase Load Balancing, 45  
Port, 25  
Power and thermal calculations, 51  
Power configurations, 37  
Power consumption, 32  
Power distribution, 37  
Power distribution units (PDUs), 31, 42, 44  
Power feed setup  
Service clearances, 47  
Service LAN, 81  
Slot, bay, position, 25  
Specifications  
assumptions, 37  
cabinet physical, 48  
enclosure dimensions, 48  
heat, 50  
nonoperating temperature, humidity, altitude, 51  
operating temperature, humidity, altitude, 50  
weight, 49  
Startup characteristics, default, 99  
Storage CLIM  
HBA slots, 19  
location in cabinet, 78  
overview, 19  
Storage CLIM, illustration of ports and HBAs, 57  
SWAN concentrator restriction, 88  
System console  
configurations, 90  
description, 81  
location in cabinet, 78  
overview, 21  
System disk location, 99  
T
Tape drives, 23  
Terminal Emulator File Converter, 95  
Terminology, 25  
Tools  
CIP Subsystem, 24  
Integrated Lights Out (iLO), 24  
Onboard Administrator (OA), 24  
OSM, 24  
SCF Subsystem, 24  
U
U height, enclosures, 47  
Uninterruptible power supply (UPS), 21, 32  
UPS  
HP R12000/3, 21, 32, 45  
input rating, 45  
user-supplied rack-mounted, 33  
user-supplied site, 33  
V
Virtual tape  
LUN, 75  
W
Weight calculation, 34, 49  
Weights, 47  
Worksheet  
heat calculation, 50  
weight calculation, 49  
Z
Zinc, cadmium, or tin particulates, 34  